There is no "I" in AI: Why humans must manage matters of consequence
We live with almost daily speculation about AI crossing the threshold of artificial general intelligence (AGI) and replacing the human workforce. Dario Amodei, CEO of leading AI research lab Anthropic, recently asserted that the technology could eradicate half of all entry-level white-collar jobs in the next five years. This conjecture may help with PR and fundraising, but it also creates anxiety about AI adoption and leaves executives wondering how to plan effectively for the future.
However, from a more empirical perspective, S&P Global recently reported that 42% of companies had abandoned the majority of their AI initiatives before they went live, and MIT Sloan Management Review notes that even though companies are investing heavily in generative AI, many are still waiting for returns on that spending. Even research involving Microsoft noted a “tendency for automation to make easy tasks easier and hard tasks harder,” which is hardly a ringing endorsement of the merits of co-intelligence.
So, how do we choose between these competing views of AI's current reality?
Sometimes it helps to zoom out, and Aristotle's vantage point, over two millennia before the first computer, is far enough from the AI melee to offer some perspective. Using his ideas, I will suggest that current claims of AGI and human replacement are premature, lack precision, underestimate human distinctiveness, and ignore history, while still allowing for a more narrowly defined and graduated revolution in the workplace.
Aristotle’s three intellectual virtues of practical wisdom (phronesis), productive skill (techne), and theoretical discovery (episteme) cover most of what happens in the modern workplace, and I will unpack them in that context.[1]
I begin with phronesis and AI’s almost complete inability to replicate it.
Phronesis
By phronesis, Aristotle means a nuanced form of applied wisdom requiring all the human faculties. Ronald Schleifer describes it as “context-sensitive, ethically grounded decision-making in conditions of uncertainty,” which sounds very like leadership. If AGI is defined against this standard, which integrates all aspects of human intelligence rather than just computational and emergent reasoning, then we are not even close.
Even without getting metaphysical about humans, it is evident that AI lacks some of our basic capabilities, such as emotions, instincts, moral capacity, long-term memory (aka experience), consciousness, and embodiment. Perhaps we should not be surprised, then, that neuromorphic supercomputers like the impressive SpiNNaker at the University of Manchester, which are designed to emulate the human brain, currently simulate only 1% of its neurons. Professor Steve Furber, SpiNNaker lead architect, concedes, "We're still a long way from full brain emulation – but we can learn a lot by modelling parts." The gap will narrow, but it is unlikely to close for decades.
So why does the imminence of AGI surface every few weeks? In addition to cynical announcements for corporate purposes, I think it is driven by generative AI’s ability to harness human language and knowledge patterns to imitate us in the most extraordinary act of ventriloquism in history. This marvel of mimicry is very convincing and makes us feel like we are interacting with an independent entity. But the “I” projected by generative platforms is what Jacques Derrida called “an illusion of presence” or “a hauntology.” It turns out the ghost in the machine is us: an echo of ourselves, derived from the human data on which these systems are trained.[2]
So, if there is no “I” in AI, there can be no genuine phronesis, and that means technical platforms cannot be allowed to lead or decide anything consequential. As such, humans must be more than “in the loop” to monitor outputs for hallucinations and bias; we need to own, define, and govern the loop for the foreseeable future. This has immediate implications for the rise of multi-agent systems capable of working autonomously. Humans must assert control over these developments to ensure they operate transparently and with manual overrides.
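To make “governing the loop” concrete, here is a minimal sketch in Python of the pattern I have in mind: every consequential action an agent proposes is routed through an explicit human gate before execution. All of the names here (Action, human_gate, run_agent_step) are illustrative, not drawn from any particular agent framework.

```python
# Illustrative only: a human-approval gate for agent actions.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    consequential: bool  # set by human-defined policy, not by the model

def human_gate(action: Action) -> bool:
    """Block until a human explicitly approves the proposed action."""
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"

def run_agent_step(action: Action) -> None:
    """Execute routine actions; route consequential ones through the gate."""
    if action.consequential and not human_gate(action):
        print(f"Manual override: '{action.description}' was blocked.")
        return
    print(f"Executing: {action.description}")

run_agent_step(Action("draft a summary email", consequential=False))
run_agent_step(Action("approve a large supplier payment", consequential=True))
```

The point of the sketch is that the boundary between routine and consequential is defined by humans in advance, not inferred by the system itself.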
Techne
So, is the AI revolution just hype and trickery? Not once we move into Aristotle’s domain of techne, which is the world of productive skills, know-how, and technique. His definition covers anything we create, including an idea, product, analysis, artistic performance, or even a cup of tea. Traditional machine learning, though underrated, can already automate many quantitative tasks, while generative AI, still early in its development, is credibly impactful in content creation, coding, and conversation (the three Cs).
But there are limits. Right now, AI automation works best on processes that take less than an hour. That window is expanding, with research from the evaluation group METR suggesting that the length of tasks AI agents can complete reliably doubles roughly every seven months, but AI cannot yet displace whole jobs and is unlikely to for another two or three years. When that threshold is crossed, organisations will start to transform fundamentally from the top down rather than just using AI for selective bottom-up tasks. Until then, it is best to focus on using AI for specific business tasks with a proven track record of economic return.
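For illustration, here is a back-of-the-envelope projection of that trajectory, assuming a one-hour horizon today and a steady seven-month doubling time (both round figures, not precise measurements): the automatable window only reaches roughly a full working week of effort after about three years.

```python
# Rough projection: task horizon doubling every 7 months from a 1-hour base.
for months in (0, 12, 24, 36):
    horizon_hours = 1 * 2 ** (months / 7)
    print(f"after {months:>2} months: ~{horizon_hours:.0f} hour(s)")
# Output:
# after  0 months: ~1 hour(s)
# after 12 months: ~3 hour(s)
# after 24 months: ~11 hour(s)
# after 36 months: ~35 hour(s)
```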
Another challenge is that developments in robotics are lagging behind those in software (Moravec's paradox): robots still require too much energy to be economically or environmentally viable and cannot yet replicate generalised fine motor skills. That means many manual tasks will remain predominantly human until some of the big investments underway at companies like Nvidia start to make real advances. That said, we should note that a combination of AI and robotics can already design, build, and drive a car (fairly) safely. This may have taken longer than predicted, but it is a sure sign of things to come.
Even with the increasing automation of workplace skills, AI will not only displace jobs but also create them. The internet removed whole industries built on information friction, such as travel agencies, but created millions of web development roles. Those front-end coding skills now look highly vulnerable to generative AI, yet prompt engineering has emerged as a completely new profession. The net position is hard to predict, but previous upheavals like the Industrial Revolution and the advent of the internet created more jobs than they destroyed. I am hopeful of a repeat.
Episteme
Aristotle’s final domain of episteme is a narrow but interesting one. It is about universal theories that can be discovered but not invented; think mathematics, science, and logic. AI has made big breakthroughs here recently, with Google DeepMind announcing AlphaEvolve in May, an “agent designed for general-purpose algorithm discovery and optimisation”, and IBM’s AI-Hilbert, which has been verifying experimental data against scientific theories and even rediscovering established laws since the middle of last year.
So far, these powerful platforms have not made a fundamental discovery of their own. Nonetheless, I do not see any theoretical reason why generative AI platforms like these could not eventually do so, especially as exponential increases in speed and capacity emerge from the progress finally being made in quantum computing. For now, though, AI is a super-tool to aid but not originate scientific breakthroughs.
Conclusions
So, perhaps, to adapt Mark Twain's often-quoted words, “Rumours of our imminent replacement are greatly exaggerated.”
We are certainly at the beginning of an AI automation revolution that will assume many work tasks and eventually redefine organisations. This will be gradual but ultimately transformational. Right now, leaders should focus on solving specific business problems using machine learning and generative AI for the three Cs of content, coding, and conversation. Some labour can be saved, but new skills will also be needed. For most companies, being a fast follower is enough, and executives should require evidence of proven benefits elsewhere before they invest.
However, no AI research in sight suggests that machines will replace humans as responsible, relational, self-reflective, and embodied leaders who have the benefit of at least 100 millennia of experience in navigating new disruptions. This aspect of work is ours for the long term.
Which brings us back to Aristotle and his preoccupation with eudaimonia, or human flourishing. Since humans remain inimitable and essential to the workplace of the future, we would do well to ensure that all apparent ‘progress’ is to the unambiguous benefit of our species.
For more content delivered directly to your inbox, subscribe to my newsletter on Substack.
[1] Nicomachean Ethics, Book 6.
[2] I will cover this idea in a subsequent article about the rise of AI therapy, companions, and eventually humanoid support robots.