Language Shapes Power at Work

[Image by l'Eretico]

AI no longer just powers our tools. It shapes how we think, how we lead, and how we assign value to human life inside organizations.

As AI becomes embedded in everyday business decisions, leaders face a deeper risk: adopting machine metaphors to describe people, relationships, and power. This shift doesn't begin with technology policy. It begins with language.

When you call employees "systems," design AI that simulates empathy, or collect feedback you don't intend to act on, you are not just streamlining operations; you are redefining leadership itself. You are shifting from stewarding human potential to managing predictable inputs. Over time, these subtle changes recode what it means to be human at work.

Why This Matters Now

Meghan O'Gieblyn's God, Human, Animal, Machine is a philosophical inquiry, not a leadership manual. Yet her insights are urgently relevant for executives steering digital transformation, AI ethics, and organizational trust.

O'Gieblyn warns that when we fail to question the metaphors built into our technologies, we quietly construct systems that conceal power, distribute responsibility without ownership, and undermine human agency. This isn't theoretical. It's already happening in the design of employee surveillance tools, customer service automation, and AI-driven decision-making systems.

Understanding this dynamic is no longer optional. It’s a leadership imperative.

1. Your Metaphors Already Make Decisions for You

You think metaphors just dress up your language? O'Gieblyn shows they program how you think. Words like "resources," "engines," and "systems" aren't surface-level abstractions—they structure decision-making, set behavioral expectations, and shape your organization's culture.

Label your team a "system to optimize," and unpredictable behavior, creative tension, emotional complexity, and divergent thinking will become deviant rather than valuable. This logic subtly discourages dissent, narrows thinking, and erodes your team's judgment and morale.

Leaders: Try This Tomorrow

  • Block 30 minutes to review your last three internal communications. Circle metaphors for people, work, or decisions. What worldview do they promote?
  • Choose one recurring meeting to pilot alternative framing. Replace mechanical or transactional language with metaphors drawn from ecosystems, craftsmanship, or care.
  • Ask your leadership team: "What's one metaphor we've outgrown—and what should replace it?"

2. Fake Empathy Creates Real Problems

O'Gieblyn's story of bonding with a robot dog sounds harmless—until you realize how easily interface design can trigger emotional investment in things that can't reciprocate or care. Now scale that across customer service, onboarding, and employee tools.

Your systems might "empathize" with users via tone, timing, or design. But if no one behind the screen takes responsibility, you've created the illusion of care—without the reality of accountability. That gap breeds frustration, alienation, and ethical ambiguity.

Leaders: Take Action Now

  • Identify one customer or employee-facing workflow where empathy is simulated. Add or reinforce access to a real human.
  • Run a quarterly "trust audit": Where do our interfaces imply care without the capacity to act on it?
  • Define internal guidelines for what your AI is—and isn't—allowed to promise on behalf of the organization.

3. Asking for Feedback Without Sharing Power Breeds Cynicism

Too many organizations simulate listening. O'Gieblyn's point is sharp: fake participation does more harm than silence. When dashboards and feedback loops don't influence actual decisions, people stop believing they ever will.

Even well-intentioned engagement tools can undermine trust if they don't deliver a visible impact. Employees and customers quickly learn the difference between being asked—and being heard.

Leaders: Start Here

  • Audit your feedback mechanisms. Which ones produce decisions? Which ones stall out?
  • Redesign at least one high-visibility feedback process to include participant influence or co-ownership.
  • Communicate outcomes clearly. When change happens, attribute it to the feedback that drove it; when it doesn't, explain why.

The Strategic Choice

O'Gieblyn doesn't offer a tech policy or change framework. What she offers is harder to implement: a demand for intellectual honesty. She shows how language reveals what systems protect and what they erase.

Metaphors may seem small, but they are operational. They define whose experience counts, whose labor matters, and who gets to decide. In the age of automation, those stakes are only getting higher.

The strategic question isn't whether to adopt AI. It's whether the assumptions behind your implementation preserve human dignity, agency, and judgment, or replace them with facsimiles.

Because the danger isn't that machines will outthink us. It's that we'll forget what it means to think like humans at all.
