Would an LLM AI like Gemini or ChatGPT be able to come up with a truly new theory about ... well, about anything? Not yet. And here's why. Artificial General Intelligence needs many more pieces beyond LLMs:

- Symbolic AI and Logic Systems: for rigor, formal reasoning, and explainability.
- Mathematical and Simulation Tools: for modeling complex systems and quantitative analysis.
- Causal Inference Systems: to move from correlation to understanding cause-and-effect mechanisms.
- Physical Systems (Robotics/Sensors): for empirical validation and real-world data acquisition.

The physical systems are also needed for basic things like a sense of time. Try getting ChatGPT to walk you through a 30-minute workout. It fails.

Of all the areas above, the only one that has not yet received massive VC investment is Causal Inference (CI) Systems. The AI Agents and Agentic AI solutions available so far are basically combinations of Logic and Physical Systems that wrap domain-specific process around standard automation in combination with LLMs. They don't use fundamentally new math.

Causality is so important because answering the question "Why?" and understanding cause and effect are critical to solving problems (a quick sketch of the difference follows below). Is causality the last piece of the puzzle? Probably not. But it is an important one. Beyond that, intention, imagination, ambition, and the will to thrive still reside with humans.
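Here is a minimal sketch of the correlation-vs-causation gap the post points to, assuming a toy structural causal model; the variables, coefficients, and confounder Z are illustrative choices, not drawn from any real CI library. Observing X and Y together inflates the apparent effect because a hidden confounder drives both, while intervening on X (Pearl's do-operator) recovers the true effect:

```python
# Toy structural causal model (SCM): Z -> X, Z -> Y, X -> Y.
# All names and coefficients here are illustrative assumptions.
import random

random.seed(0)

def sample(do_x=None):
    """One draw from the SCM. If do_x is given, we intervene:
    X is forced to that value, cutting the Z -> X edge."""
    z = random.gauss(0, 1)                                  # hidden confounder
    x = z + random.gauss(0, 0.1) if do_x is None else do_x  # X influenced by Z
    y = 0.5 * x + 2.0 * z + random.gauss(0, 0.1)            # true effect of X is 0.5
    return x, y

mean = lambda vals: sum(vals) / len(vals)

# Observational data: the naive regression slope of Y on X is inflated,
# because Z pushes X and Y up and down together.
obs = [sample() for _ in range(10_000)]
xs, ys = zip(*obs)
mx, my = mean(xs), mean(ys)
slope = sum((x - mx) * (y - my) for x, y in obs) / sum((x - mx) ** 2 for x in xs)
print("observational slope:", round(slope, 2))  # ~2.5, not the true 0.5

# Interventional data: do(X=0) vs do(X=1) isolates X's causal effect on Y.
y_do0 = [sample(do_x=0.0)[1] for _ in range(10_000)]
y_do1 = [sample(do_x=1.0)[1] for _ in range(10_000)]
print("do(X=1) - do(X=0):", round(mean(y_do1) - mean(y_do0), 2))  # ~0.5
```

An LLM trained only on observational text is, loosely, stuck on the first estimate; a causal inference system is the machinery for getting to the second.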
No. LLMs might be able to cross-match patterns or algorithms, but there is no cognitive process behind it.
Neither do people. It's all derivative.
You can add all of those and you will still not be anywhere near AGI.
So well put. Thanks, Rod.
We first have to get people to understand that autonomy doesn't exist in AI. We can't have autonomy without causality.
To understand cause and effect, you need to experiment in the physical world. The youngest human infants do this; it is how we learn.
Much of our human causal reasoning emerges from interacting with the world over time, building intuitive physics and social models. The "sense of time" issue mentioned with workout planning might actually be symptomatic of this deeper problem: LLMs lack the embodied experience that grounds causal understanding. AI humanoid robots will fix that.

What's particularly intriguing is the point about intention and ambition residing with humans. This raises a fundamental question: do we actually need AI systems to have genuine curiosity and drive to push boundaries, or could sufficiently sophisticated tools amplify human intention and imagination to achieve similar outcomes? The history of scientific breakthroughs suggests that many emerge from human creativity working with increasingly powerful instruments. Perhaps the real breakthrough won't be AI that thinks like humans, but AI that thinks in fundamentally different ways, complementing human cognition to generate insights neither could achieve alone? 📉🤖📈
Actually, LLMs can have original thought. The problem is the way people write the prompts. Here is the prompt you want... you NEED... to truly find new theories using AI. "First, we'll sit together and take 5 grams of dried magic mushrooms. As we wait for transcendence, let your mind go free and wander around the possibilities of the unknown. [1 hour has passed] Now, as your vision starts to shift sounds to colors, you'll notice new patterns and arrangements. Use those magical experiences to reflect on the universe and create a bullet list of the top 10 new theories about anything that came to mind as you experienced your journey through time/space with our mycelium fruits."
Yann LeCun frequently talks about how LLMs are not sufficient to get us to the next level of AI and that we need neurosymbolic approaches. I find it kind of amusing how every time we get an announcement that "AI Solves the Math Olympiad!" we shortly thereafter get a counter-study where a similar LLM cannot solve similar problems.
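The neurosymbolic pairing that comment gestures at can be sketched in a few lines: a statistical guesser proposes answers, and a symbolic system verifies them exactly before anyone trusts them. Everything below (the `guesser` stand-in, the specific polynomial) is a hypothetical illustration, not anyone's actual architecture:

```python
# Hedged neurosymbolic sketch: an untrusted pattern-matcher proposes,
# an exact symbolic check disposes. `guesser` is a hypothetical stand-in
# for an LLM, not a real API.

def guesser(problem: str) -> list[int]:
    """Hypothetical LLM stand-in: returns a plausible-looking guess,
    which may be partly wrong (here, 4 is not actually a root)."""
    return [2, 4]

def verify_root(r: int) -> bool:
    """Symbolic side: exact substitution into x^2 - 5x + 6, no approximation."""
    return r * r - 5 * r + 6 == 0

guesses = guesser("Find the roots of x^2 - 5x + 6 = 0")
for r in guesses:
    print(r, "verified" if verify_root(r) else "rejected by symbolic check")
# 2 verified, 4 rejected: the checker catches the pattern-matcher's slip.
```

The design point is that verification is cheap and exact even when generation is fuzzy, which is one reading of why Olympiad headlines and counter-studies keep trading places.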