Carl Benedikt Frey Says More…
Project Syndicate: A recent MIT study found that billions of dollars of investment in AI pilots are yielding no returns, invigorating discussions about the technology’s limits, with some warning of a bubble. Your new book, How Progress Ends: Technology, Innovation, and the Fate of Nations, casts doubt on grand claims about AI’s potential and points out that rapid technological change has often led to destabilization or stagnation. How do you explain the underperformance of many AI investments?
Carl Benedikt Frey: While I’m not sure about that particular MIT study, I do know that AI has a resilience problem. In 2016, when AlphaGo triumphed over Go grandmaster Lee Sedol – winning four of five games in the series – the game of Go looked “solved.” But by 2023, a human amateur using a basic PC had defeated state-of-the-art Go AIs by steering them into positions they hadn’t seen in training.
Large language models excel at tasks resembling their training data, but stumble when faced with genuinely new problems. Even advanced “reasoning” systems flounder on simple, novel puzzles, such as those in the Abstraction and Reasoning Corpus for Artificial General Intelligence v2 (ARC-AGI-2), a benchmark that evaluates AI models’ ability to solve problems they have not encountered before. When Anthropic let its Claude assistant run a tiny vending-machine business, the AI issued irrational discounts, was manipulated into bad purchases, drifted into fiction, and quickly drove the venture “bankrupt.” Real firms face far more turbulence.
There is a deeper lesson here. If the world were a static distribution, progress would require nothing more than more data and compute: observe everything, perform well everywhere. But the world changes constantly. Durable progress demands algorithmic innovation – methods that generalize and recover when plans fail – not just bigger models.
This is where institutions come in. Resilience requires experimentation; centralization throttles it. Early imperial China used technology – standardized writing, grand public works, soil mapping (for taxation) – to extend the bureaucracy’s reach. But while this approach delivered impressive early gains, it later crowded out grassroots experimentation.
In Europe, by contrast, fragmentation enabled ideas and inventors to operate more freely – such as by moving among rival patrons – sustaining discovery. Similarly, the internet prospered in the 1990s because no single gatekeeper could dictate terms. After the 1984 breakup of AT&T, no monopoly could choke experimentation, and open standards kept interoperability high and switching easy. The message is clear: if AI becomes a tool of surveillance-led control, modern China – and any polity that follows the same script – risks trading short-run efficiency for long-run stagnation.
So, the underperformance of AI investments isn’t proof that the technology is hype; it’s evidence that investment has outpaced resilience. Closing that gap demands algorithmic innovation and pro-competitive rules. If we replicate the early internet’s structure – guardrails without gatekeepers – we will keep entry open, avoid bottlenecks, and turn pilots into productivity. If we don’t, incumbent power may convert investment into monopoly rents, not progress.