Why LLMs can't create new theories yet

Rod Boothby

Senior technology executive and multi-time founder. Scaled products to 20M+ users, generated $150M+ in revenue, and secured $125M+ in VC funding. Expertise in FinTech, digital identity, AI, and developer ecosystems.

Would an LLM like Gemini or ChatGPT be able to come up with a truly new theory about ... well, about anything? Not yet. And here's why: Artificial General Intelligence needs many more pieces beyond LLMs.

- Symbolic AI and Logic Systems: for rigor, formal reasoning, and explainability.
- Mathematical and Simulation Tools: for modeling complex systems and quantitative analysis.
- Causal Inference Systems: to move from correlation to an understanding of cause-and-effect mechanisms.
- Physical Systems (Robotics/Sensors): for empirical validation and real-world data acquisition.

Physical systems are also needed for basic things like a sense of time. Try getting ChatGPT to walk you through a 30-minute workout. It fails.

Of all the pieces above, the only one that has not yet received massive VC investment is Causal Inference (CI) Systems. The AI Agents and Agentic AI solutions available so far are basically combinations of logic and physical systems that wrap domain-specific process around standard automation, in combination with LLMs. They don't use fundamentally new math.

Causality is so important because answering the question "Why?" and understanding cause and effect are critical to solving problems. Is causality the last piece of the puzzle? Probably not. But it is an important one. Beyond that, intention, imagination, ambition, and the will to thrive still reside with humans.
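To make that correlation-versus-causation gap concrete, here is a minimal sketch (a toy illustration of the standard confounding argument, not something from the post): a hidden confounder Z makes X and Y strongly correlated even though X has no causal effect on Y, and only an intervention, Pearl's do-operator, exposes the difference.

```python
import numpy as np

# Assumed toy model: a hidden confounder Z drives both X and Y,
# so X and Y correlate strongly even though X does not cause Y.
rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)            # unobserved confounder
x = 2 * z + rng.normal(size=n)    # X is caused by Z
y = 3 * z + rng.normal(size=n)    # Y is caused by Z, not by X

print(f"observed corr(X, Y) = {np.corrcoef(x, y)[0, 1]:.2f}")  # ~0.85

# A pure pattern-matcher stops at that correlation. A causal model
# asks what happens under the intervention do(X = 5): forcing X by
# fiat severs the Z -> X link, and Y's distribution does not move.
y_do = 3 * z + rng.normal(size=n)  # Y under do(X = 5); X plays no role
print(f"E[Y] observed     = {y.mean():+.3f}")
print(f"E[Y | do(X = 5)]  = {y_do.mean():+.3f}")  # ~equal: no causal effect
```

A system trained only on observational data can learn the first number; telling it apart from the second is exactly what the causal inference piece above is for.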

Alex Salkever

Chief Marketing Officer @ Vionix Biosciences | Multiple Publications | Techquity.ai

3mo

Yann LeCun frequently talks about how LLMs are not sufficient to get us to the next level of AI and that we need neurosymbolic approaches. I find it kinda amusing how every time we get an announcement that "AI Solves the Math Olympiad!" we shortly thereafter get a counter-study showing a similar LLM cannot solve similar problems.

Eric Conway

American-Built AI Websites. Global Reach. Loyal Support. Your Success, Our Mission.

3mo

No. LLMs might be able to cross-match patterns or algorithms, but there's no cognitive process behind it.

Magnus Hedemark

CTO | Applied Futurist | Building the AI-Enabled, Inclusive Enterprise

3mo

Neither do people. It's all derivative.

Pranab Ghosh

AI Consultant || MIT Alumni || Entrepreneur || Open Source Project Owner || Blogger || Interested in Cognitive Science

3mo

You can add all of those and you will still not be anywhere near AGI.

Ashley Alexander

I am building founders the 24/7 AI Executive Assistant we all need.

3mo

So well put. Thanks, Rod.

Chris Hood

AI Keynote Speaker 🎤 | Strategic Advisor | Helping enterprises cut through hype & unlock $2B+ in AI-driven growth 🚀 | 2x Best-Selling Author | 🍩 Donut Connoisseur

3mo

We first have to get people to understand that autonomy doesn't exist in AI. We can't have autonomy without causality.

To understand cause and effect, you need to experiment in the physical world. The youngest human infants do this; it is how we learn.

D. R. Dison

Author @ AIU | Ex-CSC

3mo

Much of our human causal reasoning emerges from interacting with the world over time, building intuitive physics and social models. The "sense of time" issue mentioned with workout planning might actually be symptomatic of this deeper problem: LLMs lack the embodied experience that grounds causal understanding. AI humanoid robots will fix that.

What's particularly intriguing is the point about intention and ambition residing with humans. This raises a fundamental question: do we actually need AI systems to have genuine curiosity and drive to push boundaries, or could sufficiently sophisticated tools amplify human intention and imagination to achieve similar outcomes? The history of scientific breakthroughs suggests that many emerge from human creativity working with increasingly powerful instruments. Perhaps the real breakthrough won't be AI that thinks like humans, but AI that thinks in fundamentally different ways, complementing human cognition to generate insights neither could achieve alone? 📉🤖📈

Matthew Denman

Enterprise AI Architect | Speaker | Building Systems That Augment Human Intelligence

3mo

Actually, LLMs can have original thought. The problem is the way people write the prompts. Here is the prompt you want... you NEED... to truly find new theories using AI: "First, we'll sit together and take 5 grams of dried magic mushrooms. As we wait for transcendence, let your mind go free and wander around the possibilities of the unknown. [1 hour has passed] Now, as your vision starts to shift sounds to colors, you'll notice new patterns and arrangements. Use those magical experiences to reflect on the universe, and now create a bullet list of the top 10 new theories on anything that came to mind as you've experienced your journey through time/space with our mycelium fruits."

