Are LLM advancements slowing? When GPT-5 didn't initially blow everyone's mind, the narrative quickly became "𝘈𝘐 𝘱𝘳𝘰𝘨𝘳𝘦𝘴𝘴 𝘪𝘴 𝘱𝘭𝘢𝘵𝘦𝘢𝘶𝘪𝘯𝘨"... that LLM R&D is hitting a wall.

We went from waiting 18 months for massive GPT breakthroughs to getting incremental improvements far more frequently. Our conditioned expectations, however, are distorting how we read reality. Lenny Rachitsky recently interviewed Benjamin Mann (Anthropic co-founder) on his pod, and they touched on this perception (link in comments).

GPT-3.5 to GPT-4 felt earth-shattering because we waited so long. Now we get GPT-4o mini, use-case-specific models, multimodal improvements, and fine-tuned variants dropping constantly. That's more mature product development, not slower innovation velocity, and it's exactly what you want if you're building something real on top of these models.

Think about it... would you rather:
1) wait a year for one massive update with wildly new capabilities? 𝗢𝗥
2) get steady, relatively predictable improvements you can build on?

The pace of AI research keeps accelerating, but the market hasn't adjusted its expectations to more frequent release cycles; we still seem to expect massive, paradigm-shifting advancements every quarter. Foundation models have already crossed the chasm and the mainstream is adopting AI; the only thing that's changed is the frequency of releases.

The AI magic isn't gone, it's just getting incrementally more practical.
I also don't think GPT-5 follows prompts nearly as well as earlier models did.
Here's a link to the ep of Lenny's pod where he interviews Ben... great conversation, worth checking out: https://youtu.be/WWoyWNhx2XU?si=yGye2829f3Bn1RYn