Interpreting Context: AI's Next Frontier in Complex Decision-Making

The question of what needs to be in place before we can actually have confidence in AI applied to real-world tasks, making autonomous decisions and performing actions, is definitely more complex than current public sentiment suggests.

Working with our AI Lab on topics like this, it is our task to scrutinise and test hypotheses and findings around such applications of state-of-the-art technology.

For years I have used an image of an autonomous vehicle crashing into its own brand sign to illustrate that self-driving technology has become real, but does not yet stand the test of real-world contexts. Leaving aside early experiments in the 1940s and 1950s, the first more serious self-driving cars were on the road in the 1980s. That is 40 years ago, and in the meantime we got the internet.

If you take a step back from the optimism and the wish that it works reliably, reality shows difficulties for many fully autonomous applications of Artificial Intelligence out there. Especially for complex tasks like driving a car or making decisions on social, political or medical matters, we often advise keeping a human in the loop. And rightly so!
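In practice, keeping a human in the loop often comes down to routing any decision the system is unsure about to a person. A minimal sketch of such a confidence gate in Python (the `Decision` type, the 0.9 threshold and the action names are illustrative assumptions, not taken from any specific system):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float  # the model's own confidence estimate, in [0.0, 1.0]

def route_decision(decision: Decision, threshold: float = 0.9) -> str:
    """Execute high-confidence decisions; escalate the rest to a human reviewer."""
    if decision.confidence >= threshold:
        return f"execute:{decision.action}"
    return f"escalate_to_human:{decision.action}"

print(route_decision(Decision("merge_left", 0.97)))  # execute:merge_left
print(route_decision(Decision("merge_left", 0.55)))  # escalate_to_human:merge_left
```

The same pattern applies beyond driving: in medical or political decision support, the escalation branch is where human contextual judgement re-enters the process.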

There are two observations to be made. On the one hand, people are still the experts at interpreting context in the real world and in complex situations. Taking a holistic view of one's surroundings, capturing all the factors, the chronology leading up to the situation, and all the cues one can intuitively interpret for very fast decisions about potential danger, is something that needs far more research. On the other hand, anticipating and perceiving how a specific action influences the real world in the (near) future are tasks not easily performed by current autonomous systems, such as self-driving cars.

A recent article in the NYT, linked below, shows that self-driving cars seem smooth when we ride in them, but it is a Turing test they do not yet pass. One of those companies (Cruise, backed by General Motors) reportedly depends on 1.5 workers per autonomously driving car, intervening on average every 1.5 to 5 miles of driving. You do not always notice it when you are in these cars, and it is easy to be flabbergasted by the experience. When I took such a car (Waymo) in San Francisco last month, we interacted with a remote company employee at least two or three times during a single ride, showing that these workers indeed monitor and control the behaviour of these seemingly self-driving cars.

Therefore, to advance AI, our focus and research must pivot towards integrating current deep learning technologies with model-based approaches that not only compute outcomes through mathematical models, physical engines, and logical reasoning but also explain their conclusions. Furthermore, we should aim to enhance AI's perception of the environment and context, allowing it to abstract, interpret, and predict possible future events.
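As a toy illustration of coupling a learned component with a model-based check, the sketch below lets a simple kinematic model veto a proposed manoeuvre and return a human-readable reason. The function names, the 6 m/s² deceleration and the 0.5 s reaction time are illustrative assumptions, not real vehicle parameters:

```python
def braking_distance_m(speed_mps: float, decel_mps2: float = 6.0,
                       reaction_s: float = 0.5) -> float:
    """Stopping distance from a basic kinematic model: reaction distance
    plus braking distance (v*t_react + v^2 / (2*a))."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def vet_proposal(action: str, speed_mps: float, gap_m: float) -> tuple[str, str]:
    """Accept a learned model's proposed action only if the physical model
    agrees, and always return an explanation alongside the outcome."""
    needed = braking_distance_m(speed_mps)
    if action == "proceed" and gap_m < needed:
        return "brake", f"override: gap {gap_m:.1f} m is below stopping distance {needed:.1f} m"
    return action, f"accepted: gap {gap_m:.1f} m vs stopping distance {needed:.1f} m"
```

At 20 m/s the model needs roughly 43 m to stop, so a learned "proceed" with only a 30 m gap is overridden to "brake", together with the reason, which is exactly the kind of explainable conclusion the paragraph above argues for.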

Only by shifting our focus to these aspects can we hope to achieve the promise of Artificial Intelligence in the real world.

(Artificial Intelligence already solves many complex tasks today, and the world can really expect a lot from the technology. It is just not my belief that current technology and its embedding can be left alone out there, yet.)

https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html

Rick Sladkey

Computer Scientist | Independent AI researcher and enthusiast | Open source author and advocate | Creator of Meta-Query | Linking deep academic theory to practical technical applications | Capgemini

11mo

My own experience with my one free month of Tesla FSD, and my ongoing experience with basic Autopilot, led me to appreciate "assisted driving" or "hybrid cruise control" as cognitive enhancement: it allows an operator (who remains just as cognitively engaged) to focus on situational awareness rather than lower-order tasks such as staying in the lane.

Thordur Arnason

VP Capgemini Invent, Global AI GTM Lead

1y

Good piece! It makes me think over why we are so focused on looping out humans when there is so much to gain by taking the human in the loop route. Douglas Engelbart wrote in his seminal 1962 paper, "Augmenting human intellect: a conceptual framework" about designing systems that leverage the strengths of both humans and machines, creating a symbiotic relationship that would significantly enhance our problem-solving capabilities. It's a powerful reminder that the ultimate goal of technology should be to amplify our abilities, not to make us obsolete. The advancements in generative AI, for example, have shown us that machines can enhance our creativity, help us to model complex scenarios, and even anticipate problems, but they still rely on the unique insights and the contextual understanding that only humans can provide. As we continue to innovate, the wisdom from Engelbart's work serves as a guiding principle. Let's remember the value that human experience and insight bring to our collective intelligence. After all, it's the human-centric approach that has historically led to the most meaningful breakthroughs in our society.

Joppe Montezinos

Software/ Machine learning Engineer | Passionate Learner | Creative Problem Solver

1y

Really interesting read! 👨‍💻

Peter van der Putten

Using trustworthy AI to create impact in business, society, arts & science | Director Pega AI Lab | Assistant professor Artificial X, Leiden University

1y

Nice post! Real time context is also a good reason why automated decisioning needs to be real-time - it needs to be able to take that context into account. For instance, what is the reason for a customer service call, what did a customer just do on the website or in a mobile app, what is the current state of a case or a process.
