We Can't Slow AI Down, So We'd Better Get Smarter
How the Prisoner's Dilemma Is Accelerating Us Toward a Future We're Not Ready For
TL;DR: The AI Race Is a Prisoner's Dilemma, and We're All Playing
• Everyone agrees we should slow AI down. No one wants to be the first to do it
• Entry-level jobs aren't evolving, they're vanishing: 5.8% unemployment for recent grads
• We're not just losing jobs, we're losing the path to mastery, meaning, and mobility
• The real choice: AI as a bicycle (amplifies us) vs. elevator (replaces us)
• A handful of tech giants now control the cognitive infrastructure of our economy
• Education is failing: $1.7T in debt for jobs AI is already absorbing
• This future is not inevitable but it must be shaped intentionally
Before we begin: The opinions shared here are entirely my own, and yes, some are intentionally provocative. Sometimes it takes a jolt (even a slightly exaggerated one) to interrupt passive optimism or runaway acceleration.
In an age where the fear of falling behind and a deep erosion of trust between people, companies, and nations drive reckless speed, this isn't alarmism. It's a call for intentionality, reflection, and smarter choices.
Maria graduated with honors in computer science, expecting to join a company, learn on the job, and climb the career ladder. Instead, she found a job market that had fundamentally shifted. Companies weren't hiring entry-level engineers. They were deploying AI agents overseen by smaller teams of senior developers.
After applying to 74 jobs in six weeks, the pattern became clear: most entry-level roles had either disappeared or been redefined into AI-augmented workflows requiring years of experience she didn't have.
This isn't an isolated story. It's the new reality, and it perfectly illustrates the impossible choice we face as a society.
The Runaway Train
Yuval Noah Harari (historian & author of Sapiens) captures our dilemma perfectly: "We know it's risky, for sure. We understand that it would be wiser to go slower and invest in safety. But we cannot trust our human competitors. If others move faster, they'll win. So we have no choice but to accelerate."
This is the prisoner's dilemma at global scale. Everyone, from CEOs and heads of state to university presidents, knows that slowing down would be safer. More guardrails, more oversight, more time to adapt. But no one wants to be first to pause, because falling behind in AI means falling behind in everything: defense, commerce, scientific discovery, economic competitiveness.
So we accelerate not because it's wise, but because it feels less dangerous than the alternative.
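The dilemma above can be made concrete with a toy payoff matrix. This is a minimal sketch with illustrative numbers of my own choosing (not from any cited source): whatever a rival does, "accelerate" yields the higher payoff, so both sides accelerate and land in the worst joint outcome.

```python
# Illustrative payoffs for the AI race framed as a prisoner's dilemma.
# (our move, their move) -> (our payoff, their payoff); numbers are arbitrary
# but preserve the classic ordering: temptation > reward > punishment > sucker.
PAYOFFS = {
    ("pause", "pause"): (3, 3),           # coordinated safety: best joint outcome
    ("pause", "accelerate"): (0, 4),      # we fall behind in everything
    ("accelerate", "pause"): (4, 0),      # we win the race, they fall behind
    ("accelerate", "accelerate"): (1, 1), # reckless speed for everyone
}

def best_response(their_move: str) -> str:
    """Pick the move that maximizes our own payoff, given the rival's move."""
    return max(("pause", "accelerate"),
               key=lambda ours: PAYOFFS[(ours, their_move)][0])
```

"Accelerate" is a dominant strategy here: it is the best response whether the rival pauses or accelerates, even though mutual pausing would leave both sides better off. That is the trap the article describes, and why escaping it requires coordination rather than individual restraint.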
Even Demis Hassabis, co-founder of DeepMind, warns that AI will be "10 times bigger than the Industrial Revolution and maybe 10 times faster." If the Industrial Revolution remade society over a century, AI threatens to do so in a decade. Our institutions (political, educational, economic) aren't built to adapt at that speed.
Yet Hassabis himself now calls for an "IAEA for AI": a global oversight body modeled on the International Atomic Energy Agency, which has kept nuclear technology from spiraling out of control through shared safety standards, mandatory inspections, and binding agreements between nations. For AI, this could mean coordinated guardrails on dangerous capabilities, shared safety research, and agreed-upon limits on autonomous weapons and mass surveillance systems.
From Tools to Autonomous Agents
We've crossed a critical threshold that changes everything: AI no longer just obeys. It decides.
Traditional tools wait for instructions. Modern AI sets goals, plans multi-step tasks, and adapts based on feedback. OpenAI's Operator understands user intent beyond literal commands. When you say "plan my vacation," it doesn't just search for flights: it interprets your preferences, budget constraints, and schedule to orchestrate an entire travel experience. Google's Gemini resolves support tickets by analyzing context and dynamically choosing solutions, learning from each interaction to improve its approach. Perplexity's Comet browser integrates AI so deeply it can interpret high-level objectives like "schedule a meeting with the team" and execute complex workflows like browsing calendars, sending emails, and coordinating responses, all from understanding your goal rather than following rigid commands.
These systems share a revolutionary capability. They interpret intent and dynamically choose how to fulfill tasks, sometimes achieving outcomes that exceed their original programming. They reason across multiple steps, simulate actions, and validate approaches before executing, demonstrating a form of digital intuition that mirrors human problem-solving.
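The tool-versus-agent distinction described above can be sketched in a few lines. This is a hypothetical illustration, not the API of Operator, Gemini, or Comet: the class names, the canned plan, and the simulated failure are all invented for the example.

```python
def traditional_tool(command: str) -> str:
    """A passive tool: executes exactly the literal command, nothing more."""
    return f"executed: {command}"

class Agent:
    """A minimal goal-driven loop: interpret intent into a multi-step plan,
    act on each step, and adapt when a step fails (all stubbed here)."""

    def interpret(self, goal: str) -> list[str]:
        # Turn a high-level goal into a plan (hard-coded for illustration).
        plans = {
            "plan my vacation": ["check budget", "search flights",
                                 "match schedule", "book hotel"],
        }
        return plans.get(goal, [goal])

    def act(self, step: str) -> bool:
        # Execute one step and report success; we pretend one step fails
        # so the adaptation path is exercised.
        return step != "search flights"

    def run(self, goal: str) -> list[str]:
        log = []
        for step in self.interpret(goal):
            if not self.act(step):
                # Adapt: choose an alternative rather than give up.
                step = step + " via alternate provider"
            log.append(step)
        return log
```

The point of the sketch is the loop, not the stubs: a tool stops at the literal command, while an agent owns the gap between your stated goal and the sequence of actions that fulfills it, including recovering from failures along the way.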
This shift from passive assistant to proactive actor transforms our relationship with intelligence itself. We're creating systems that think differently than humans, yet we increasingly rely on them for critical decisions. That growing dependence on systems we barely understand may be our greatest vulnerability.
In this new landscape, the smartest AI won't necessarily win. Users don't always want the most capable agent. They want the one they trust with their mortgage, their kids, their secrets. Trust becomes the last defensible advantage in a world of commoditized intelligence.
As I argued in a previous piece, "Forget the Model Race," the future of AI won't be won by bigger models. It will be won by the one that creates the most trusted, seamless, human-centered product experience.
The Vanishing Career Ladder
So far in 2025, nearly 400 tech companies have announced layoffs affecting 94,000 employees, with many explicitly citing AI-driven efficiencies. But this isn't just about layoffs. It's about entire categories of work disappearing.
Companies are eliminating junior roles entirely, favoring small teams of experienced workers who manage AI tools. Microsoft projects that 95% of code will be AI-generated within five years. Dario Amodei, CEO of Anthropic, warns that "AI could wipe out half of all entry-level white-collar jobs" in the next 1-5 years, pushing unemployment to 10-20%. He says we must stop "sugar-coating" what's coming because "most people are unaware this is about to happen."
Vinod Khosla, the legendary venture capitalist, puts an even finer point on the timeline: "AI will be able to do 80% of 80% of all jobs within 3 to 5 years." His advice? "Optimize your career for flexibility, not a single profession… AI will automate narrow specialist tasks better than you."
We used to say, "AI won't take your job; someone using AI will." Now we're entering Phase 2: the job itself might disappear. The function gets absorbed into automated workflows.
This creates more than a gap; it creates a dangerous feedback loop. Without entry-level positions, how do workers gain the experience required for senior roles? We're not just losing jobs. We're losing the journey itself.
The most vulnerable roles are precisely those once protected by higher education:
• Language-based work: Writers, editors, paralegals, journalists
• Analytical roles: Financial analysts, accountants, budget planners
• Creative but structured jobs: Graphic designers, UX/UI specialists
• Programming and support: Entry-level developers, QA testers
Meanwhile, jobs requiring physical skill, real-world context, or emotional nuance (plumbers, nurses, welders, eldercare workers) remain resistant to automation.
We're witnessing the "Revenge of the Blue-Collar Class." The middle is hollowing out, leaving a barbell economy of senior roles managing AI and human-only jobs in trades and care.
The New Power Concentration
We're seeing the fastest consolidation of economic power in modern history. A handful of companies now control the cognitive infrastructure powering everything from content creation to financial analysis. The winners fall into six interlocking categories:
• Foundation model creators: Own the underlying intelligence (OpenAI, Anthropic, Google)
• Platform giants: Control distribution and user ecosystems (Apple, Microsoft, Amazon)
• Hardware providers: Supply the chips and cloud infrastructure (Nvidia, AMD, TSMC)
• Workflow integrators: Embed AI into business software (Salesforce, Adobe, ServiceNow)
• Interface innovators: Build trusted user experiences (Perplexity, Figma, Notion)
• Data owners: Hold the behavioral context AI needs (Visa, telcos, Stripe)
This isn't just market concentration. It's the emergence of cognitive gatekeepers. From model to interface to memory, a few firms increasingly own the entire AI stack.
This concentration creates a paradox: the same companies building systems that eliminate jobs still need consumers who can afford their services. AI doesn't just replace human effort; it decouples productivity from employment. A single model can generate billions in value with minimal human labor, captured by a few shareholders. But those owners still need consumers. If millions lose jobs or meaningful income, demand collapses.
The math is simple: if people can't work, they can't earn. If they can't earn, they can't spend. No spending equals no economy. Eventually, AI-generated value must be redistributed not as charity, but as system maintenance.
This means the AI winners must become the funders of this redistribution. Here's the inevitable projection: if AI replaces jobs at scale (whether over five years or thirty), who will have income to buy the AI subscriptions and agents that Big Tech is building?
That's why many economists are revisiting Universal Basic Income. Could it be a solution, perhaps funded directly or indirectly by the AI winners themselves? Not as idealism, but as what one economist calls a "loyalty program for capitalism itself": a systemic adjustment to keep demand alive in a post-labor economy.
While power concentrates at the top, something subtler happens below. We're outsourcing our thinking.
The Cognitive Fitness Crisis
During the Industrial Revolution, machines automated physical labor. Over time, society compensated by inventing exercise: gyms and sports, artificial movement to replace lost effort.
Now we're doing the same to our minds. AI automates mental work:
• We don't memorize: search handles that
• We don't write: LLMs draft for us
• We don't navigate: algorithms choose our route, our playlist, what the kids watch
• We don't decide or reflect: AI nudges us faster than we can pause to think
Just as our bodies weakened without physical effort, our minds risk cognitive atrophy. We're losing number sense to calculators, spatial memory to GPS, and soon, emotional intelligence to chatbots.
For us, AI feels like a choice. For our children, it may be their first environment. They might outsource foundational thinking from the start, never building the cognitive muscles we take for granted.
Columbia psychologist Betsy Sparrow's research revealed the "Google Effect": people remember less when they know information is stored externally. Our brains optimize for retrieval, not retention. But what we store in memory shapes how we think. It's the raw material for intuition, creativity, and insight.
This is why we need "mental fitness," deliberately choosing the hard way, not because it's efficient, but because it builds capability:
• Writing your first draft without help: so you remember how to struggle through a blank page
• Navigating without GPS: fMRI scans show frequent GPS users have reduced hippocampus activity, while London taxi drivers who memorize 25,000 streets show enlarged memory centers
• Writing by hand, not typing: multiple studies confirm handwriting boosts learning and idea formation in ways typing doesn't
• Doing math in your head before reaching for a calculator: mental arithmetic engages working memory and numerical fluency, like lifting weights for your brain
• Arguing both sides of a topic before searching for "the answer": internal debate sharpens thinking more than finding consensus does
• Reading slowly, thinking without shortcuts: so your intuition and creativity don't rust
It's about choosing thought even when machines can do it for you. In a world where thinking becomes optional, thinking on purpose becomes a superpower.
The Bicycle vs. Elevator Choice
This brings us to the central metaphor: AI as a bicycle versus elevator.
A bicycle amplifies your effort while keeping you strong. You pedal, steer, and balance; the machine multiplies your capability, but you remain essential. An elevator removes effort entirely. You press a button and get carried to your destination, but your muscles atrophy from disuse.
The choice isn't about speed. It's about who remains in control.
At T-Mobile, we see this principle in action through our own AI platforms built with OpenAI. AI handles routine tasks in the background, freeing our experts to focus on complex problems requiring human empathy and judgment. AI doesn't replace our people; it amplifies them.
This is the bicycle principle. Technology that makes humans more capable, not less necessary.
The Education Reckoning
The old promise of education (go to college, get a safe job, earn a good living) is cracking in real time. Universities sold a generation $1.7 trillion in debt for jobs that no longer exist.
The cruel irony? Computer science departments were literally building the tools that would automate their graduates' jobs, yet most failed to prepare students for this reality. Business, journalism, and liberal arts students graduated with no understanding of how AI would reshape their fields.
As Scott Galloway (NYU professor) puts it, "You can't charge $80,000 a year to teach what YouTube covers for free."
We're witnessing the end of the Knowledge Economy. For decades, education rewarded retention: memorizing frameworks, mastering facts, reciting answers. But when machines remember everything and synthesize instantly, knowing things is no longer a differentiator.
The future won't reward what you can recite, but what you can navigate. Not how much you store, but how well you collaborate with machines that store it all. Knowledge is no longer power; the ability to direct knowledge is.
Education must prepare us for:
• Judgment over recall: deciding in ambiguity, not repeating facts
• AI fluency: collaborating with AI across disciplines
• Emotional intelligence: building trust, resolving conflict
• Entrepreneurial thinking: creating value in emerging systems
• Reinvention as core skill: adapting to careers without names yet
We're not competing with AI. We're earning our place beside it.
The One Decision AI Can't Make for Us
Here's the uncomfortable truth. AI will not ask our permission to evolve. But its path isn't predetermined. It's shaped by our choices, incentives, and governance.
We must collectively decide:
• What remains off-limits to automation? Should AI make hiring decisions, diagnose mental health conditions, or determine prison sentences? Which human roles are too important to delegate?
• How should AI-generated value be shared? When one AI system replaces hundreds of workers, who captures that productivity gain? How do we prevent winner-take-all outcomes?
• Who gets protected during this transition? Do we prioritize retraining programs, universal basic income, or job guarantees? What about communities built around industries that disappear overnight?
• What does progress actually mean? Is it pure efficiency and GDP growth, or does it include human dignity, meaningful work, and shared prosperity?
If we don't make these choices deliberately, market forces and geopolitical competition will make them for us, optimizing for metrics we may not value.
This Is Still Ours to Shape
We've adapted to massive technological shifts before. The plow, printing press, steam engine, assembly line, personal computer. Each disrupted existing systems but ultimately expanded human capability. AI can do the same, but only if we approach it intentionally.
This requires coordination across governments, industries, and cultures. Not to slow down from fear, but to move deliberately toward the future we want.
The window for influence is closing, but it hasn't closed. We can still choose the bicycle over the elevator. We can still insist that being human becomes an intentional choice, not an accident of what machines can't yet do.
The future belongs not to those who move fastest, but to those most thoughtful about what should remain human.
How We Stay Human
The future won't be decided by a summit or a system update. It will be shaped by the small, human choices we make, especially when no one's watching.
Personally:
Practice cognitive resistance training. Do things the hard way. On purpose.
Write the first draft without autocomplete: so you remember how to struggle through a blank page.
This article was written that way. I drafted it solo, then used AI to polish and cut the clutter.
Solve the puzzle before searching so your brain doesn't forget how to wrestle with uncertainty.
Even if you Google it later, give your mind a chance to stretch first.
Teach someone a concept so you discover what you really understand.
Explaining from memory forces you to organize your thinking and reveals where the gaps are.
Not because tech is bad, but because our minds atrophy without friction.
Professionally:
Ask better questions, not just generate better answers.
Because in an age of infinite answers, the value shifts upstream: to curiosity, framing, and discernment.
Design tools that amplify human judgment, not erase it.
The goal isn't full automation. It's thoughtful augmentation. The best tools make people wiser, not obsolete.
Mentor early-career talent to preserve the rungs we once climbed.
If AI wipes out entry-level roles, human leaders must rebuild the ladder through time, guidance, and trust.
Collectively:
Build guardrails before it's too late.
We don't wait until after the crash to install seat belts. We shouldn't wait for social collapse before addressing the risks of unchecked AI. Guardrails signal maturity, not hesitation.
Support policies that ensure AI serves people.
Even AI company CEOs are calling for regulation. That's not charity, it's survival instinct. Regulation must protect workers, preserve agency, and align incentives beyond profit. Otherwise, the market will move faster than society can absorb.
Demand education systems prepare students to work with AI, not compete against it.
We need curricula that focus on judgment, adaptability, collaboration, and the irreplaceably human.
Ask the one question no model can answer for us: What parts of being human are non-negotiable?
We lost our sense of direction to GPS and we never got it back.
Let's not lose our ability to think, decide, connect, and dream.
Because in a world where everything can be automated, staying human is no longer automatic.
It's a choice.
And the future will belong to those who make it. Deliberately, imperfectly, and together.
Your turn:
What's one thing you still do the hard way and on purpose?
What's one skill you're intentionally keeping human in your work?
👇 Share in the comments. I'd love to hear how you're navigating this shift.