Language Shapes Power at Work
AI no longer just powers our tools. It shapes how we think, how we lead, and how we assign value to human life inside organizations.
As AI becomes embedded in everyday business decisions, leaders face a deeper risk: adopting machines' metaphors to describe people, relationships, and power. This shift doesn't begin with technology policy. It begins with language.
When you call employees "systems," design AI that simulates empathy, or collect feedback you don't intend to act on, you are not just streamlining operations; you are redefining leadership itself. You are shifting from stewarding human potential to managing predictable inputs. Over time, these subtle changes recode what it means to be human at work.
Why This Matters Now
Meghan O'Gieblyn's God, Human, Animal, Machine is a philosophical inquiry, not a leadership manual. Yet her insights are urgently relevant for executives steering digital transformation, AI ethics, and organizational trust.
O'Gieblyn warns that when we fail to question the metaphors built into our technologies, we quietly construct systems that conceal power, distribute responsibility without ownership, and undermine human agency. This isn't theoretical. It's already happening in the design of employee surveillance tools, customer service automation, and AI-driven decision-making systems.
Understanding this dynamic is no longer optional. It’s a leadership imperative.
1. Your Metaphors Already Make Decisions for You
You think metaphors just dress up your language? O'Gieblyn shows they program how you think. Words like "resources," "engines," and "systems" aren't surface-level abstractions—they structure decision-making, set behavioral expectations, and shape your organization's culture.
Label your team a "system to optimize," and whatever resists optimization, such as unpredictable behavior, creative tension, emotional complexity, and divergent thinking, becomes deviant rather than valuable. This logic subtly discourages dissent, narrows thinking, and erodes your team's judgment and morale.
Leaders: Try This Tomorrow
2. Fake Empathy Creates Real Problems
O'Gieblyn's story of bonding with a robot dog sounds harmless—until you realize how easily interface design can trigger emotional investment in things that can't reciprocate or care. Now scale that across customer service, onboarding, and employee tools.
Your systems might "empathize" with users via tone, timing, or design. But if no one behind the screen takes responsibility, you've created the illusion of care—without the reality of accountability. That gap breeds frustration, alienation, and ethical ambiguity.
Leaders: Take Action Now
3. Asking for Feedback Without Sharing Power Breeds Cynicism
Too many organizations simulate listening. O'Gieblyn's point is sharp: fake participation does more harm than silence. When dashboards and feedback loops don't influence actual decisions, people stop believing they ever will.
Even well-intentioned engagement tools can undermine trust if they don't deliver a visible impact. Employees and customers quickly learn the difference between being asked—and being heard.
Leaders: Start Here
The Strategic Choice
O'Gieblyn doesn't offer a tech policy or change framework. What she offers is harder to implement: a demand for intellectual honesty. She shows how language reveals what systems protect and what they erase.
Metaphors may seem small, but they are operational. They define whose experience counts, whose labor matters, and who gets to decide. In the age of automation, those stakes are only getting higher.
The strategic question isn't whether to adopt AI. It's whether the assumptions behind your implementation preserve human dignity, agency, and judgment, or replace them with facsimiles.
Because the danger isn't that machines will outthink us. It's that we'll forget what it means to think like humans at all.