Leading with Hybrid Intelligence: Make AI Work for People, Not the Other Way Around

The AI gold rush is on. But despite the explosion of tools, models, and pilots, most companies are still stuck at the starting gate – chasing automation dreams without building trust, purpose, or adoption. The problem isn’t the technology. It’s the leadership.

The multimillion-dollar "Athena" dashboard was supposed to revolutionize sales forecasting. A year later, it was a ghost town. Managers were still quietly using their old spreadsheets, and a massive investment was gathering digital dust.

In my experience, this story isn't an outlier.

And it's not a story about bad code; it's a story about failed trust.

This piece is about making AI work where it matters most: with people, not just as users, but as co-creators of intelligent systems.

To do that, we must embrace Hybrid Intelligence – a practical, human-centered approach that amplifies our natural strengths with algorithmic power.

We will explore what this really means, how to build the critical skill of "double literacy," and how to lead this shift from the top.

Because if your AI strategy doesn't start with people, it's already behind.


Focus Your AI Journey on Hybrid Intelligence

Most enterprise AI projects don't fail because the models are bad. They fail because no one uses them.

Smart dashboards sit unopened and forecasts get overruled by gut feel. We tell ourselves it's a maturity issue, but the reality is simpler: the system was never designed for people in the first place.

This is where Hybrid Intelligence comes in. At its core, it’s the idea that humans and AI should work together, each doing what they do best.

AI handles speed, scale, and complex data analysis; humans bring context, judgment, and ethical values. The magic happens when this collaboration is intentional.

Consider a global supply chain control tower. I saw a version of this in action with a client. The AI flagged an impending port strike in Antwerp, instantly modeling three rerouting options through Rotterdam, Hamburg, and Zeebrugge.

But it was the human planner who knew that rerouting through Rotterdam, while fastest on paper, would violate a key client's strict sustainability sourcing policy – a nuance the algorithm completely missed. The planner rerouted through Hamburg, averting a crisis the AI would have created.

AI didn't decide. It equipped a human to decide better and faster. This is the critical difference between automation and augmentation. Remember this mantra:

Automation replaces a task; augmentation enhances a judgment.

Companies like KLM, Morgan Stanley, and Walmart are building systems on this logic, where AI proposes and humans make the final decision. The real benefit isn't just speed; it's the quality of the final decision. Done right, Hybrid Intelligence produces better outcomes than either AI or humans could achieve alone.
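To make the pattern concrete, here is a minimal Python sketch of the "AI proposes, human decides" loop from the Antwerp example. Every name, number, and the sustainability flag is invented for illustration – real control towers are far messier – but the division of labor is the point: the model ranks, the person applies the context it cannot see and owns the final call.

```python
from dataclasses import dataclass

@dataclass
class RouteOption:
    port: str
    transit_days: int
    meets_sustainability_policy: bool  # context the model never sees

def ai_propose(options):
    """The AI side: rank options purely on what the model optimizes – speed."""
    return sorted(options, key=lambda o: o.transit_days)

def human_decide(ranked):
    """The human side: apply the client's sourcing policy, then own the call."""
    acceptable = [o for o in ranked if o.meets_sustainability_policy]
    return acceptable[0] if acceptable else ranked[0]

options = [
    RouteOption("Rotterdam", 4, False),   # fastest on paper, violates the policy
    RouteOption("Hamburg", 5, True),
    RouteOption("Zeebrugge", 7, True),
]

proposal = ai_propose(options)     # AI proposes: Rotterdam first
decision = human_decide(proposal)  # human decides: Hamburg
print(decision.port)               # -> Hamburg
```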

But it only works if you start with a clear business problem instead of chasing capabilities.

Use case clarity gives you the blueprint for success and separates the winners from the “wannabes”. It helps you define:

  • Where AI genuinely adds value and where it doesn’t.
  • What the human role is at each step.
  • What risks or ethics demand human oversight.
  • How to measure improvement beyond technical performance, focusing on better outcomes.

Without that clarity, you’re just automating ambiguity.

That’s why Hybrid Intelligence is the right place to start. It’s grounded. It’s testable. And most importantly, it’s human.


Double Literacy: The Skill That Powers Collaboration

Hybrid Intelligence runs on trust, and trust depends on a skill most companies overlook: double literacy.

This isn’t just technical fluency. It’s the ability to be fluent in two languages:

  • Human Judgment: The world of values, ethics, context, and emotional intelligence.
  • Machine Logic: The world of algorithms, probabilities, system limitations, and data dependencies.

It’s the difference between seeing an AI recommendation and knowing what to do with it.

Too often, we assume people will instinctively know when to challenge a forecast or flag a bias. But without a culture that empowers them to intervene, they will default to either blind acceptance or total rejection.

Neither builds value.

This goes beyond simple training; it's about embedding new habits and fostering a culture where human insight is treated as more than gut feel.

True double literacy starts by helping employees translate what AI is saying into something actionable, not just in technical terms, but in operational and emotional ones.

Take retail. AI forecasts a shift in demand for winter coats. Merchandising can’t just accept that blindly – they need to understand the why, assess upstream supplier constraints, and factor in local consumer behavior. The AI doesn’t make the decision. It informs it. But only if the human knows how to interpret and respond.

Or healthcare: an AI model suggests a probable diagnosis. A physician uses that insight in the context of the patient’s history, their own expertise, and ethical duty of care. Without double literacy, this becomes a dangerous game of guesswork or overreliance.

Stop and reflect: Are you the kind of person who always trusts the dashboard? When was the last time you questioned an algorithm or offered context it missed? Does your workplace value human insight, or only machine results? If this makes you pause, it may be time to invest in double literacy – for your own sake and your team’s.

Start by turning this concept into a tool with a quick Double Literacy Check-In. Ask your teams:

  1. When was the last time we successfully challenged and overruled an AI recommendation? What was the outcome?
  2. Can you explain the "why" behind the last major AI-driven forecast you received? What were the key data points and assumptions the model used?
  3. Where is the "break glass" point? Do you know the specific conditions under which you should distrust the system's output?

If your team can't answer these questions, you aren't building trust – you're building compliance. And compliance isn't collaboration; a culture that rewards only compliance stifles innovation.


Leading the Change with the A-Frame Model

An AI strategy that sounds great in a slide deck often falls apart in the real world because ownership of the new way of making decisions is never defined. AI reshapes human roles, and leadership's job is to provide the structure for this new partnership.

To guide this integration, we can use the A-Frame Model, adapted from the work of Dr. Cornelia Walther on AI adoption. It forces leaders to answer five critical questions. I’ve added a "Leadership Litmus Test" to each to make it more pointed.

Awareness: Do teams know how AI influences their work?

  • Litmus Test: Can your team members explain to a new hire how AI works with them, not just for them?

Alignment: Are your AI efforts solving the right business problems?

  • Litmus Test: If you polled the end-users, would they say the AI tool solves one of their top three most frustrating problems?

Appreciation: Do you value human insight as much as model accuracy?

  • Litmus Test: How do you reward an employee who correctly proves the AI is wrong?

Acceptance: Have you positioned AI as a tool for talent, not a threat to it?

  • Litmus Test: Is your "AI training" budget focused on building judgment and context, or just on teaching people which buttons to click?

Accountability: Are roles clear for when things break or when a decision truly matters?

  • Litmus Test: Who is the single person accountable when a hybrid human-AI decision leads to a negative outcome? If you can't name them, no one is.

Answering these questions shifts you from just launching tech to designing a new operating model. It also requires new metrics. Move beyond speed and efficiency and start measuring:

  • The rate at which AI outputs are actually used in decisions.
  • Whether trust in the system – and human understanding of it – improves over time.
  • Whether human-AI workflows outperform either side alone, with evidence of humans and machines refining decisions together.

If you aren't tracking real engagement – does the system help people do better work? – you're not scaling intelligence. You're scaling noise.
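If it helps to picture what tracking engagement could look like, here is a rough, hypothetical sketch: a decision log with two invented fields, from which you can compute how often AI recommendations are actually used and how often people override them. The schema and numbers are assumptions for illustration, not a prescribed metric.

```python
# Hypothetical decision log – the field names and values are illustrative only.
decision_log = [
    {"ai_recommendation_used": True,  "human_override_improved_outcome": False},
    {"ai_recommendation_used": False, "human_override_improved_outcome": True},
    {"ai_recommendation_used": True,  "human_override_improved_outcome": False},
    {"ai_recommendation_used": False, "human_override_improved_outcome": False},
]

total = len(decision_log)
adoption_rate = sum(d["ai_recommendation_used"] for d in decision_log) / total
overrides = [d for d in decision_log if not d["ai_recommendation_used"]]
useful_overrides = sum(d["human_override_improved_outcome"] for d in overrides)

print(f"AI output used in {adoption_rate:.0%} of decisions")
print(f"{len(overrides)} overrides, {useful_overrides} of which improved the outcome")
```

Read the two numbers together: a rising adoption rate plus a steady stream of overrides that genuinely improve outcomes is the signature of augmentation, while 100 percent adoption usually just means nobody is looking.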

Next Monday morning, try this: Pick one current project or process. Map its workflow to the A-Frame. Who owns each step? Where can human input improve – not just validate – the machine? Where is trust weakest? Address one of those bottlenecks this week.


Conclusion: The Real Work Begins Now

Hybrid Intelligence is the future of decision-making, but it doesn't happen by accident. It happens when leaders design for it – intentionally and with people at the absolute center.

This isn’t about flashy demos or elaborate dashboards; it’s about the quiet, hard work of building trust. So let me leave you with a prediction:

In the next 24 months, the market leaders will not be the companies with the most advanced algorithms. They will be the ones who have mastered the art of the human-AI partnership.

AI won't transform your business if your people don't trust it. And if your team isn't ready, they're not the ones falling behind – you are.

IKEA has one of the most fascinating stories about human-machine partnerships. They focused on ethics and people first, then developed their AI framework. They delegated simple tasks to their chatbot and reskilled 8,500 call center agents into remote interior design consultants. They've improved customer satisfaction and added a new revenue stream!

Final challenge for you (and your team): Monday morning, pick one task where AI and human judgment intersect. Use the A-Frame as a lens. What’s missing? Make one improvement – however small – and see what changes.

Because AI won't transform your business if your people don't trust it or know how to use it. What are you measuring on your AI journey – what matters, or just what’s easy?

Sources

  1. Dellermann (2019) – Hybrid Intelligence
  2. Partnership on AI (2019) – Human-AI Collaboration Guidelines
  3. Shneiderman (2020) – Human-Centered AI
  4. MIT Sloan (2023) – The Human + AI Equation
  5. HBR (2025) – Agentic AI Is Already Changing the Workforce
  6. Walther (2025) – Why Hybrid Intelligence Is the Future of Human-AI Collaboration
