Avoiding the slippery slope to AI Autopilot

We are at a fork in the road. One path treats artificial intelligence as an amplifier of human intellect, a collaborator that extends what we can do while keeping us in charge. The other treats AI as autopilot, a shortcut that slowly erodes the skills and authority that come from real understanding. The choice will determine whether we become more curious and capable, or more passive and dependent.

Recent research on human–AI agency reinforces that this fork in the road is real. A 2025 contribution in Information & Organization from Sebastian Krakowski of the Stockholm School of Economics shows that generative AI marks a qualitative shift in how organizations function, with the potential either to automate tasks or to augment and expand human creativity, learning, and decision-making. His work argues that the way we distribute agency between humans and AI systems, through design and policy choices, ultimately determines the organizational, societal, and ethical outcomes.

Readers who want to go deeper can consult the source paper: Sebastian Krakowski, "Human-AI agency in the age of generative AI," Information & Organization, 2025.

The Case for Amplification

Modern systems can draft emails, summarize reports, or outline strategies in seconds. Offloading this groundwork should raise our ceiling, not lower our engagement. When framed as a teammate, AI acts as a combined research analyst and memory aid: it widens our aperture while leaving direction, interpretation, and judgment with us. The goal isn't effortless cognition; it's richer insight.

Yann LeCun, Meta’s chief AI scientist and a Turing Award winner, describes the ideal assistant as a personal staff of specialists under the user’s control. Just as the printing press expanded access to knowledge without replacing readers, and just as a powerful car enhances the driver rather than sidelining them, AI can expand our reach without taking the wheel. The payoff is empowerment at scale if ownership truly stays with the user.

Douglas Engelbart made the same case long before today’s systems existed. His vision for interactive computing was never about the mouse; it was about augmenting human intellect. Technology, in his view, should help people think, learn, and solve complex problems more effectively. That idea, a “bicycle for the mind,” is even more relevant now. Machines can sift data, retrieve sources, and track details; humans frame the questions, interpret nuance, and make creative leaps. The result should be deeper thinking, not less of it.

See his 1962 manifesto: Douglas Engelbart, “Augmenting Human Intellect: A Conceptual Framework”.


The Slippery Slope to Autopilot

The same capabilities that make AI a powerful partner can also turn it into a cognitive crutch. Historian Yuval Noah Harari warns about the slide from assistance to deference. GPS navigation makes travel easier, yet weakens the habit of forming mental maps. Generative tools can do the same for writing, tempting us to outsource the blank page, and the craft that goes with it.

Treating AI outputs as automatically correct invites passive consumption and dulls the habit of inquiry. Small disciplines such as checking assumptions, asking why a recommendation was made, and probing alternatives keep us in the role of thinker-in-chief.

Harari’s distinction between techno-humanism and Dataism makes the stakes clearer. Techno-humanism uses technology to enhance people while preserving judgment. Dataism assumes algorithmic decisions are superior by default. One expands understanding; the other erodes accountability. Augmentation is not just a design preference; it’s a value choice about who retains interpretive authority.

See: Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow.


Power, Centralization, and Governance

This tension becomes sharper when viewed through governance. Centralized control of AI is not a theoretical concern. If powerful assistants are concentrated in the wrong hands, they can shape discourse, filter information, and narrow public autonomy.

Geoffrey Hinton, a Turing Award winner and 2024 Nobel laureate in Physics (awarded for foundational discoveries that enable machine learning with artificial neural networks), argues that intelligent assistants could boost productivity across industries and benefit humanity if their gains are broadly shared and their risks carefully managed. But he also warns that the same systems could enable large-scale manipulation or catastrophic loss of control if development outpaces governance.

John Hopfield, 2024 Nobel Prize laureate in Physics (also awarded for foundational discoveries and inventions enabling machine learning with artificial neural networks), and a pioneer of neural network theory, echoes the concern. His contributions show both the promise and the instability that emerge as AI systems become deeply integrated into information ecosystems. The same advances that unlock new capabilities also create new points of influence and vulnerability, especially when deployed at scale.

Their message converges on a single imperative: amplification must preserve human agency, welfare, and freedom from manipulation. That requires systems that keep the user, not the model and not the platform, at the center.

Broad access and personal control counter these risks. The best outcomes come from assistants that operate under individual ownership, much as personal computers did, rather than from systems tightly gatekept by companies or governments.


Design Principles That Keep Humans in Charge

Design either amplifies human judgment or quietly replaces it. These principles keep the balance on the right side:


Human Final Authority

AI can propose, simulate, and draft, but any action with real‑world impact needs explicit human approval. Judgment stays with the person.

Legibility and Inspectability

Users must be able to check sources and ask for reasoning, like asking a colleague to walk through a draft. This keeps slow, deliberate thinking engaged.

Bounded Autonomy and Reversibility

Assistants should operate within clear, user-defined limits. Consequential actions must be reversible where possible, and users should be able to retrace how recommendations were produced (see the sketch after these principles).

Clear Authorship and Accountability

It should always be obvious what the machine suggested and what the human endorsed. Visible authorship keeps accountability where it belongs.

Aligned Goals and Incentives

Assistants need to state what they optimize for, and users must be able to adjust those goals. Systems tuned to engagement or ad revenue will inevitably pull against agency.
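
As a minimal illustration of the first and third principles, the Python sketch below shows one way an assistant might gate consequential actions: everything runs inside user-defined scopes, nothing with real-world impact executes without an explicit yes, and every decision lands in an audit log. The names here (ProposedAction, ActionGate, execute) are hypothetical, chosen for this sketch rather than drawn from any particular product.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ProposedAction:
    description: str   # what the assistant wants to do
    rationale: str     # why it recommends it (legibility)
    reversible: bool   # whether the action can be undone

@dataclass
class ActionGate:
    allowed_scopes: set                             # user-defined limits (bounded autonomy)
    audit_log: list = field(default_factory=list)   # visible record of every decision

    def execute(self, action, scope, approve):
        # Bounded autonomy: refuse anything outside the scopes the user granted.
        if scope not in self.allowed_scopes:
            self.audit_log.append((datetime.now(), "blocked: out of scope", action.description))
            return False
        # Human final authority: a consequential action needs an explicit yes.
        if not approve(action):
            self.audit_log.append((datetime.now(), "declined by user", action.description))
            return False
        self.audit_log.append((datetime.now(), "approved by user", action.description))
        return True

gate = ActionGate(allowed_scopes={"drafts", "calendar"})
recap = ProposedAction("Send meeting recap to the team",
                       "You asked for a summary after every call", reversible=False)
gate.execute(recap, scope="drafts",
             approve=lambda a: input(f"Approve '{a.description}'? [y/N] ").strip().lower() == "y")

The point of the sketch is the shape, not the specifics: the approval callback is wherever the human actually says yes or no, and the audit log is what keeps authorship and accountability visible afterwards.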


A Concrete Pattern: Personalized, User‑Owned Assistants

These principles point toward AI built around personal ownership. One practical model is an assistant grounded in a digital twin of your knowledge, a personal knowledge graph of your work, notes, and prior ideas. With that context, the system can surface relevant information, highlight connections you might miss, and reduce routine cognitive load.

Personal models offer clear benefits: more relevant retrieval, better alignment with your goals, and less exposure to external agendas. But they carry risks too, including reinforcing blind spots or mishandling sensitive data. Ownership doesn’t eliminate these issues, but it aligns the system’s behavior more directly with the individual.

The decisive design features are portability, auditability, and the ability to override. Users should be able to export or reset their knowledge graph, inspect the elements that shaped a recommendation, and override or annotate outputs that conflict with their values. That turns the assistant into a transparent extension of judgment rather than an opaque authority.
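
To make the pattern concrete, here is a small hypothetical sketch of what a user-owned knowledge graph with those three features could look like in Python. Everything in it (PersonalGraph, Note, explain, annotate, export) is an illustrative assumption rather than an existing library: notes link to each other, explain lists exactly which notes shaped a recommendation, annotate records a user override, and export hands the whole graph back as plain JSON.

import json
from dataclasses import dataclass, field, asdict

@dataclass
class Note:
    id: str
    text: str
    links: list = field(default_factory=list)         # ids of related notes

@dataclass
class PersonalGraph:
    notes: dict = field(default_factory=dict)          # note id -> Note
    annotations: dict = field(default_factory=dict)    # recommendation id -> user correction

    def add(self, note):
        self.notes[note.id] = note

    def explain(self, recommendation_id, used_note_ids):
        # Auditability: surface exactly which notes shaped a recommendation.
        return [self.notes[i].text for i in used_note_ids if i in self.notes]

    def annotate(self, recommendation_id, correction):
        # Override: record a correction the assistant must respect next time.
        self.annotations[recommendation_id] = correction

    def export(self):
        # Portability: the whole graph leaves with the user as plain JSON.
        return json.dumps({i: asdict(n) for i, n in self.notes.items()}, indent=2)

graph = PersonalGraph()
graph.add(Note("n1", "Q3 strategy draft", links=["n2"]))
graph.add(Note("n2", "Notes from the pricing workshop"))
print(graph.explain("rec-17", ["n1", "n2"]))            # context behind recommendation "rec-17"
graph.annotate("rec-17", "The pricing notes are outdated; ignore them.")
print(graph.export())

Whether the real store is a graph database or a folder of files matters less than the contract: the user can always see, correct, and take away the context the assistant runs on.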


A Near‑Term Playbook for Human‑Led AI

If amplification is the goal, the near term is where the trajectory gets set. With improved reasoning and richer personal context, assistants can already scan news and research while you sleep and brief you on what changed, why it matters, and what questions you’re not yet asking.

The benchmark for success is simple: Are people making better decisions, asking sharper questions, and learning faster because of these tools? Do they understand the rationale behind a recommendation well enough to challenge it? If not, the system is drifting toward autopilot regardless of how it’s branded.

A credible path forward is within reach: build assistants that are personalized and user‑owned; design interfaces that engage critical thinking rather than bypass it; require transparency about sources and objectives; and align commercial incentives with user agency instead of raw engagement. Machines handle scale and speed. Humans provide direction and meaning. That should be the design and governance constraint for anyone serious about deploying AI in the real world.
