Did I Just Scoop Sam Altman on "The Gentle Singularity"? A Modest Observation About Predicting AI's Future
On June 10, 2025, Sam Altman published a blog post titled "The Gentle Singularity" that has been making waves across the AI community. In it, the OpenAI CEO argues that while we're experiencing a technological revolution, it's unfolding more gradually than many expected—"the singularity happens bit by bit, and the merge happens slowly."
I have to admit, reading Altman's post gave me a distinct sense of déjà vu. You see, just a few months earlier—on February 20, 2025—I published a LinkedIn article titled "Why Artificial General Intelligence Will Be Both Revolutionary and Underwhelming," making remarkably similar points.
Now, I'm sure Sam Altman has better things to do than scroll through LinkedIn articles by patent attorneys (shocking, I know). But the convergence of our thinking suggests something important: a growing consensus is emerging that the dramatic "overnight robot uprising" scenarios are giving way to a more nuanced understanding of how transformative AI will actually unfold.
Great Minds Think Alike (Or So I'd Like to Think)
Let me compare our perspectives—not to claim any credit, but to highlight how this more measured view of AI's impact is gaining traction:
On the pace of change, Altman writes: "From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly." I argued: "The revolution will be profound but gradual, transformative but incomplete."
On daily life persisting, Altman notes: "In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes." I made a similar point: "Each of these revolutions transformed society profoundly while simultaneously leaving many aspects of human life remarkably stable."
On human adaptation and new roles, Altman predicts we'll "figure out new things to do and new things to want" and that future jobs will "feel incredibly important and satisfying to the people doing them." I similarly argued we can expect "new industries and job categories we can't yet imagine" and emphasized developing "frameworks for human-AGI collaboration."
On uneven impact, while Altman focuses on the self-reinforcing nature of AI progress, I emphasized what I called the "jagged pattern of AGI impact"—how different sectors will transform at different rates and in different ways.
Where We Diverge (And Why It Matters)
Our approaches do differ in interesting ways. Altman focuses more on the positive potential and specific technological progressions, painting a picture of exponential improvement driven by AI helping to build better AI. I took a more historically grounded approach, examining how previous technological revolutions actually unfolded and identifying the various constraints that will likely moderate even AGI's impact.
Altman is more optimistic about timelines and capabilities—predicting agents that can do "real cognitive work" in 2025, "novel insights" in 2026, and physical world robots in 2027. I focused more on why even highly capable systems will face real-world limitations: "Intelligence alone isn't sufficient for many real-world tasks... the actual execution of these actions would still face the same physical constraints we deal with today."
But these differences complement rather than contradict each other. Altman brings the insider's perspective on what's technically possible, while I bring a historical perspective on how societies actually adapt to transformative technologies.
The Real Story: A Maturing Conversation About AI
The more significant story here isn't about who said what first (though I reserve the right to a modest "I told you so"). It's that we're witnessing a maturation in how thoughtful people discuss AI's future impact.
The conversation is shifting away from science fiction scenarios—both utopian and dystopian—toward more nuanced analyses of how artificial intelligence will actually integrate with human societies, institutions, and daily life. This isn't because AI is less powerful than we thought, but because we're developing a more sophisticated understanding of how transformative technologies actually transform things.
As I noted in my February article, "We already live in a world full of human experts in every field. These experts possess general intelligence and deep specialized knowledge. Yet their existence hasn't made other humans obsolete." AGI will face similar patterns—extraordinary capabilities constrained by economic, physical, and social realities.
Why This Consensus Matters
This emerging consensus—that AI transformation will be profound but gradual, revolutionary but uneven—has important implications for how we prepare.
Both Altman and I agree on perhaps the most important point: the future won't arrive in a single dramatic moment. As Altman puts it, we're climbing "the long arc of exponential technological progress; it always looks vertical looking forward and flat going backwards, but it's one smooth curve."
The Gentle Revolution Continues
Whether you call it "The Gentle Singularity" or describe AGI as "Revolutionary and Underwhelming," the core insight remains the same: the most profound technological revolution in human history will likely unfold in ways that feel both extraordinary and surprisingly ordinary.
The future with AGI will be revolutionary—just not in the way Hollywood imagined. And perhaps that's exactly the kind of future we need: one that transforms society while preserving what makes us human.
P.S. Sam, if you're reading this, I'm available for collaboration on future blog posts. I've got some thoughts about AI and the patent system that might interest you...
Read the original articles: