People First in the Age of AI – The Real Misalignment Between AI Investments and What Workers Actually Need
Series 2/3
Welcome Back: Series 2 Overview
This is the second post in our 3-part series unpacking groundbreaking Stanford research on how U.S. workers view the rise of AI agents in the workplace.
In Series 1, we explored what AI agents really are (not just chatbots but autonomous task-doers) and what workers actually want AI to do.
The insight?
Workers aren’t anti-AI: they just want AI that helps them, not replaces them.
Tasks that are tedious, repetitive, or mentally draining are ideal for automation. But tasks that require creativity, empathy, or nuanced decision-making should remain human-led.
This blog focuses on the mismatch: AI is being heavily deployed where workers don’t want it, and underused where they need support most.
In Series 3, we’ll pull it all together by focusing on what this shift means for the future of human skills—and how organisations should rethink workforce development and leadership in an AI-augmented era.
Series 1 Recap in One Sentence:
Workers are asking for AI to help them do less of the soul-draining tasks and more of the meaningful work—but too often, AI is being pointed in the wrong direction.
The Key Discovery: The Desire-Capability Mismatch
The Stanford researchers built a framework called the Desire–Capability Landscape, comparing two things for every task:
- How much workers want that task automated (desire)
- How well current AI can actually perform it (capability)
They divided tasks into four zones:
- Green Light Zone: workers want automation, and AI can deliver
- Red Light Zone: AI is capable, but workers don’t want it automated
- R&D Opportunity Zone: workers want help, but AI isn’t ready yet
- Low Priority Zone: neither strong desire nor strong capability
Now here's the shocker:
41% of AI startups and investments (e.g., Y Combinator companies) are focused on the Red Light and Low Priority zones—not where workers are calling for support.
AI Is Targeting the Wrong Problems
Let’s be clear: most AI development isn’t evil or careless. But it’s often tech-first, not human-first.
Many AI solutions are focused on:
- Creative work and content generation
- Client and team communication
- Nuanced decision-making
Yet workers are saying:
"We don’t want AI doing that. We want help processing payroll errors, organizing compliance data, scheduling meetings, or handling repetitive reporting."
These tasks live in the Green Light Zone—they’re high volume, low human meaning, and AI is ready to help.
But instead, AI is going after high-agency tasks like:
- Creative writing and content
- Client-facing communication
- Nuanced, people-centred decisions
These tasks fall into the Red Light Zone—because they touch deeply human areas like voice, tone, trust, empathy, or team dynamics.
So why is this happening?
Because startups and investors often chase novelty, not necessity.
But for companies that value people-first culture, this is a dangerous disconnect. It risks damaging trust, morale, and the employee experience.
Human Edge Perspective: Align AI with Human-Centered Workflows
At Human Edge Collective, we believe in building AI systems and strategies that:
- Start with worker needs, not technology trends
- Automate the draining, repetitive work
- Protect the creative, empathetic, human-led work
This means changing how we frame AI strategy:
Don’t ask: “Can this task be automated?” Ask: “Do our people want this automated—and how would it improve their work experience?”
What Companies Should Do Differently
Here’s how you can apply this insight inside your organisation:
1. Map Your Workflows by Human Value
Break down key roles into specific tasks. Then rate each on:
- Volume: how repetitive, tedious, or draining it is
- Human meaning: how much creativity, empathy, or judgement it requires
- AI readiness: how reliably current AI could handle it
Target automation only where all three align.
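To make the "all three align" rule concrete, here is a minimal sketch in Python. The task names, 1–5 scores, and the threshold are illustrative assumptions, not part of the Stanford framework:

```python
# Hypothetical task ratings, each scored 1-5 by your team:
#   volume        = how repetitive or draining the task is
#   human_meaning = how much creativity, empathy, or judgement it needs
#   ai_readiness  = how reliably current AI could handle it
tasks = {
    "Processing payroll errors": {"volume": 5, "human_meaning": 1, "ai_readiness": 4},
    "Writing client proposals":  {"volume": 2, "human_meaning": 5, "ai_readiness": 3},
    "Scheduling meetings":       {"volume": 4, "human_meaning": 1, "ai_readiness": 5},
}

def automation_candidate(scores, threshold=4):
    """All three must align: high volume, low human meaning, high AI readiness."""
    return (scores["volume"] >= threshold
            and scores["human_meaning"] <= 6 - threshold
            and scores["ai_readiness"] >= threshold)

for name, scores in tasks.items():
    zone = "Green Light" if automation_candidate(scores) else "keep human-led"
    print(f"{name}: {zone}")
```

The point of the sketch is the AND condition: a task that scores high on only one or two dimensions stays human-led, no matter how automatable it looks.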
2. Involve Employees Early
Use surveys, voice interviews, or small pilots to understand which tasks your people want automated, which they want to keep, and why.
Stanford’s study used audio-enhanced interviews to collect honest, nuanced worker feedback. This allowed people to express emotion, not just checkbox responses.
You can do the same—just ask open questions like: “Which parts of your job drain your energy?” and “Which parts would you never want handed to AI?”
3. Avoid Overreach in the Red Zones
If a task involves:
- Your voice, tone, or creative judgement
- Trust, empathy, or relationships
- Team dynamics or people decisions
…you’re in the Red Light Zone. Avoid automating too aggressively here. Instead, build AI copilots: assistive tools that help but don’t take over.
A Word on the Human Agency Scale (HAS)
Another powerful insight from the study was the Human Agency Scale (H1–H5)—a spectrum of how much human involvement is needed for any given task.
HAS Level | Description
H1 | AI can handle the task entirely on its own
H3 | Human and AI work together in equal partnership
H5 | AI cannot function without full human involvement
🔎Workers overwhelmingly preferred H3: Equal Partnership.
That’s a sweet spot where people feel empowered, AI adds value, and tasks get done faster and better.
Let this be your blueprint for designing AI at work—not as a replacement engine, but as a collaborative layer.
Quick Summary from Series 1 & 2
In Series 1, we learnt that workers welcome AI—but only when it supports them, not when it replaces their judgement, voice, or creativity.
In Series 2, we revealed that most AI investments are misaligned with these needs, focusing on automating areas where humans still want control while ignoring the tasks they actually want help with.
What’s Coming in Series 3
In the final post of this series, we’ll explore:
🧠 How AI is reshaping the definition of valuable human skills
📉 Why technical knowledge is no longer enough
📈 Which interpersonal, emotional, and organizational skills are now rising in strategic importance
And most importantly, we’ll explore how Human Edge Collective helps organisations develop talent, culture, and strategy to thrive in this Human+AI world.
Final Thought: Build with Empathy, Not Just Efficiency
You don’t need to be a tech company to win with AI.
You just need to be a people-first company that listens to its workforce, respects its values, and builds smart tools that elevate—not diminish—what it means to be human at work.
If you’re ready to audit your internal workflows, build your human–AI roadmap, or redesign your talent strategy—we’re here to help.
Let’s make AI work for people. Not the other way around.
#HumanEdgeCollective #PeopleFirst #AIAndHumans #HumanCenteredAI #FutureOfWork #HumanPlusAI #ResponsibleAI #DigitalTransformation #WorkforceLeadership #EmpathyDrivenWorkplace #HRInnovation #AIInTheWorkplace #AIInvestmentStrategy