When Meaning Is On Trial
Meta, Antitrust, and the Governance Crisis No One Is Naming
This week, Meta stands trial. Not just for what it bought, but for what it made the rest of us believe.
The Federal Trade Commission is challenging the company’s acquisitions of Instagram and WhatsApp, alleging that Meta used them to neutralize potential rivals and cement a monopoly (FTC v. Meta, 2020). But this trial isn’t just about competition law. It’s not about whether Facebook was smart or Instagram was small.
When the System Loses Its Language
In a recent article, I described AI Anomia as “impaired collective ability to name AI-related tools and applications ethically and meaningfully.” It isn’t merely a condition in which leaders and regulators lose their grip on the terms that once guided judgment (fairness, openness, responsibility, harm); increasingly, they actively shape those terms to their advantage. The complexity of the technology is unavoidable. But where that complexity resides is a leadership decision, and most leaders now shift that burden, by design, to the least empowered parts of the organization. This has happened in almost every sector throughout history: oil and gas, automotive, retail, and beyond.
The Meta trial is a case study in institutional anomia.
The FTC argues that Meta’s acquisition of Instagram in 2012 wasn’t merely a business deal but a strategic effort to “neutralize a competitor.” They cite internal emails. They trace patterns of behavior. They suggest that Meta didn’t just grow—it grew by eliminating friction.
Meta’s defense is not without merit. The acquisitions were approved. The digital ecosystem has changed. TikTok, X, and BeReal all now compete for attention. From Meta’s view, what once looked like dominance now looks like survival in a volatile market.
That tension is exactly the problem: the vocabulary of competition has evolved, but our tools for deciding what counts as harm have not. The FTC is trying to reimpose meaning. Meta is insisting the past is settled. And courts are stuck in the middle—trying to decipher what words meant then through the lens of what they permit now.
The Strategic Management of Meaning
This isn’t a legal drama—it’s a linguistic one.
The FTC will argue intent: that Meta acquired rivals to consolidate power. Meta will argue inevitability: that the landscape changed, and so did the stakes.
But the deeper question is how those arguments are made legible at all—and to whom.
Language isn’t just a byproduct of power. It’s one of its most enduring tools. And in an environment of AI Anomia, strategic ambiguity becomes a form of insulation. The less clearly a company defines its role, its responsibilities, or even its category, the harder it becomes to regulate.
This is what makes Joel Kaplan’s role at Meta so central—not as a coder or product visionary, but as a semantic strategist. He doesn't run the systems. He scripts the narratives. He ensures the company’s behavior can always be reframed—sometimes years after the fact.
When Governance Fails to Keep Pace
In AI Anomia, I described a growing leadership dilemma: when systems evolve faster than our ability to name them, govern them, or hold them accountable, institutional coherence begins to erode. This erosion isn’t always visible at first. It manifests quietly—in the language we use to explain complexity, and in the assumptions we stop questioning. Terms like fairness, consent, and harm don’t disappear. They degrade. They get abstracted, softened, made compatible with scale.
This is the condition under which governance begins to misfire—not due to the absence of law, but because the language law relies on has been strategically repurposed. In that space, accountability becomes difficult not because the facts are missing, but because the categories we use to interpret those facts have drifted. Fairness becomes optimization. Consent becomes click-through. Harm becomes an unfortunate design trade-off.
The Meta trial illustrates this breakdown with unusual clarity. The FTC is not only attempting to prove intent—it’s attempting to reassert definitions that have been steadily eroded by more than a decade of internal narratives, shifting platform policies, and public reframing. Meta, in turn, isn’t just defending its actions. It’s defending its authority to define what those actions meant—and to do so on its own terms, long after the fact.
What we’re seeing is not merely regulatory lag. It’s institutional misalignment—a growing gap between the frameworks that once guided oversight and the semantic terrain now occupied by the companies they seek to regulate. This isn’t a question of whether platforms changed the rules. It’s a question of whether they changed the language the rules depend on, and whether anyone noticed in time to intervene.
Why Courts Struggle with Language, Not Just Law
The courtroom isn’t adjudicating semantic drift. It’s adjudicating intent—whether Meta knowingly acquired competitors to prevent market threats, and whether it misled regulators in the process.
But that’s what makes this case so difficult: The very language used to assess intent has been eroding for over a decade.
The FTC is trying to reassert meaning—of “monopoly,” of “neutralize,” of “competition.” Meta is relying on the ambiguity it helped institutionalize.
The court will weigh facts. But the deeper conflict is linguistic: What did Meta mean by what it said—and when did that meaning change?
That’s not drift as a legal argument. It’s drift as a governance failure—a breakdown in our shared ability to name harm before it calcifies into strategy.
What We Need to Start Naming
We like to think of complicity as loud and visible. But most of the time, it’s quiet. It looks like staying busy. Choosing not to ask. Accepting just enough clarity to keep moving. Sometimes, collapse doesn’t arrive by force—it’s welcomed, even gratefully, because it spares us the discomfort of uncertainty.
What makes this trial so unsettling isn’t just what Meta did, or even how long it took to challenge it. It’s that the terms of judgment—competition, harm, neutrality, innovation—have already been altered. The longer we delay in confronting that shift, the more familiar these distortions become.
We’ve grown used to platforms shaping behavior while denying intent, rewriting internal decisions as external inevitabilities. We’ve accepted public apologies that acknowledge “trust” without defining what was broken. We’ve stopped asking what words like integrity, openness, or user protection actually require—so long as the interface still works.
This is the real urgency: not that we’re on the verge of unprecedented harm, but that we’ve already normalized so much of it in plain sight. Meaning hasn’t been lost. It’s been negotiated away—quietly, gradually, and often with our consent.
If AI Anomia describes the institutional failure to name complexity clearly, then the challenge before us is to reclaim the responsibility to name what matters—before it’s renamed for us.
We need to start naming those distortions, and asking what words like integrity, openness, and user protection actually require of us.
The Meta trial may result in a ruling. But the deeper challenge is still ahead: To decide whether we will continue outsourcing the burden of meaning to those most fluent in deflection—or whether we’re willing to do the slower, more difficult work of making meaning intelligible again.
Because the real question isn’t just whether Meta crossed a line. It’s whether we still remember where we drew it.
A Courage Break
We didn’t lose the language all at once.
We surrendered it—gradually, then suddenly.
Not just under pressure, but sometimes gratefully—because ambiguity feels easier than accountability, and complexity is always someone else’s job.
Now the question isn’t just what Meta meant. It’s whether we still know what we mean when we say trust, or safety, or public interest—and whether we’re ready to name what those words should cost again.
Want to go deeper?
This post extends ideas from Driving Data Projects, where I explore how misaligned metrics and vague terminology quietly derail data initiatives, and Driving Your Self-Discovery, which offers tools for building reflective capacity as a core leadership skill in complex, tech-driven environments. In the Meta trial, we see the same pattern at scale: when organizations lose grip on language, they lose the ability to steer with integrity.
How does semantic drift show up in your team? I work with organizations to diagnose it, align their data language with their values, and build governance that scales trust, not just outputs.
Feel free to get in touch or explore how I support teams navigating these tensions.
Recommend This Newsletter
Lead With Alignment is a periodic newsletter read by data professionals, decision-makers, and quietly courageous change agents who want to explore the concepts, strategies, and tactics that help keep values in the room. If this sparked a pause for you—share it with someone who needs one too.