AI’s Clock Speed is Accelerating

This chart from METR tells a profound story: the length of tasks AI can complete at a 50% success rate is doubling every 7 months.

What started with GPT-2 answering questions in seconds has scaled to today’s frontier models (GPT-4, GPT-4o, Sonnet 3.7) handling tasks like training classifiers or building robust image models, things that once took humans hours.

The implications are staggering:
- Productivity Compression → Work that took months to execute may soon take minutes.
- Capability Compounding → Each model builds on the last, accelerating discovery and application.
- Strategic Urgency → Enterprises and governments don’t just need AI roadmaps; they need adaptive AI operating systems that evolve with this pace.

But speed also sharpens risks. As models race forward, governance, safety, and resilience must scale at the same rate, or faster.

This is the defining paradox of our age: AI is compounding like Moore’s Law on steroids, but our institutions move at human speed. The question isn’t whether AI can keep doubling; it’s whether leadership, governance, and society can keep up.

👉 What do you think? Are we ready for this pace, or is the gap widening too fast?
AI's Clock Speed: Doubling Every 7 Months
More Relevant Posts
The most significant leap forward in AI isn't about making LLMs faster. It's about giving them permission to be slower.

This sounds backward, right? Our entire industry is built on the religion of low latency. We've been trained to believe that speed is the ultimate metric of success.

But that model is breaking. We're discovering that LLMs produce their most profound work not through sheer computational speed, but through a process of digital contemplation. When we build systems that allow for iteration, self-correction, and reasoning, we unlock a level of insight that a rushed, first-pass answer could never achieve.

We've been trying to apply assembly-line metrics to what is essentially a creative process. It's time for a new definition of "performance" in the age of AI.

What does high-performance AI look like to you?

#AI #LLMs #Innovation #FutureOfTech #Performance
Lately, I’ve been following a few developments in AI that stand out, not just because of the headlines, but because of what they signal for where things are heading.

First, there’s a real shift happening with the latest GPT models. It’s not just about making chatbots that “sound” smarter, but about building systems that can actually reason: linking ideas, making logical leaps, and holding up over complex tasks. In many ways, this is what we’ve been waiting for: AI that doesn’t just talk, but actually thinks through problems. The implications for research, legal analysis, and any industry relying on good decision-making are huge.

Then there’s what DeepMind achieved with protein folding. I find this fascinating because it highlights AI as a driver for scientific progress. Predicting protein structures used to be a painstaking process. Now it’s moving at an entirely new pace, which accelerates advances in medicine and biology. To me, that’s proof that AI’s purpose isn’t just automation; the real promise is in enabling discoveries humans alone might not reach.

And finally, the way AI assistants are making their way into everyday enterprise tools deserves attention. The integration of systems like Copilot into familiar platforms isn’t just a technical update; it’s changing how people work, make decisions, and share knowledge. But it also makes questions about data, ethics, and trust more important than ever.

Taken together, these trends are reminders that AI is rapidly moving from the lab into the heart of work and society. There’s a lot to be excited about, but maybe even more to think through carefully as we go.
The real AI challenges are more "people & process" than technology.

IME AI experiments "fail" because they're designed around existing hierarchical structures, and executive sponsors are (understandably) generally unwilling to rock the corporate boat. Once you've got a "10x process improvement quick win," the logical next step is to zoom out and reengineer the organization's fundamental CX, organizational structure, and workers' economic incentives; legacy implementations, which include entire departments with specialized bureaucratic functions, are rarely fit for purpose in this new context.

Until there's enough pain to drive fundamental organizational change, we'll keep seeing expensive AI theater instead of transformation. The technology demands flatter, more agile structures between executive decision-making and customer interaction; most (but not all!) enterprises are not quite ready to confront the particulars that get you from here to there.

(Above is my $0.02; the linked Harvard Business Review article resonated with my own firsthand experience, and with what I've heard from other credible applied AI practitioners, and is worth reading in its entirety.)

https://lnkd.in/eke28jMC
This would measure the maturity of IT governance, organizational culture, and innovation appetite, and whether IT is a strategic partner or just a service-delivery function.
AI Adoption Expert | Fmr. MIT AI Co-Chair | Helping Leaders Execute 10x Faster | ex-Red Bull, -Arterys (acq. by Tempus AI, NASDAQ:TEM), -ARPA-H AI Advisor
Great snippet: “The real opportunity—the one that will actually generate returns—is to look carefully at your internal operations and the external customer journey and start with how you can create real value, in the near term, using AI tools.”
“…The technology demands flatter, more agile structures between executive decision-making and customer interaction; most (but not all!) enterprises are not quite ready to confront the particulars that get you from here to there…” Christian Ulstrup #ai #management #digitaltransformation
AI projects don’t fail because AI doesn’t work. They fail because leaders treat generative AI like legacy IT.

The path to joining the 5% that win is clear: agent-first design, unit-economics discipline, and human-AI collaboration. Those who act now can turn pilots into platforms and build a compounding advantage quarter after quarter.

https://lnkd.in/g58_Gjtq
Everyone’s talking about AI like it’s either a mythical savior or a scary sci-fi villain. But the reality? It’s way more nuanced.

Nearly 4 in 5 companies now use AI in some form, up from just over half a year ago. And yet 95% of those investing in generative AI haven’t seen real profit payoff.

So what does that tell us? Not that AI is overhyped, but that most AI isn’t transforming work. It’s not helping people become better at what they do.

That’s where RNMKRs is different. We don’t use AI to replace human interaction; we use it to preserve it. Our simulations adapt to what each learner says in real time. No scripts. No static prompts. Just active, responsive practice.

So when the real moment arrives, students and reps aren’t trying to remember lines. They’re present. Grounded. Human.

Because adaptive AI isn’t about automation. It’s about readiness. It’s about connection. It’s about confidence.

Practice the hard. Show up human.

Sources: Business Insider, McKinsey & Company, Hostinger, Reuters (2024)