Building the Stack – A Road-map to Digital Sovereignty, Part 6: Who Shapes the Truth?

Models as Mediators of Knowledge

In an increasingly AI-mediated world, the way we access knowledge, interpret data, and make decisions is shifting. What used to be the domain of experts, archives, and institutional memory is now increasingly handled by language models, search proxies, and digital assistants. These systems do not just relay information — they frame it. They summarize, filter, and prioritize. They respond based on patterns in their training data, fine-tuning choices, and alignment layers — each of which subtly shapes what counts as relevant, accurate, or credible.

This transformation has profound implications. Large language models now serve as research assistants, tutors, legal guides, and even mental health supports. But few users understand how these models generate their answers — or whose worldview they encode. The result is a new kind of soft power: not the power to coerce, but the power to define. The power to decide which truths surface and which remain buried. In this world, governance is not just about protecting privacy or securing infrastructure. It is about protecting epistemology — the integrity of how we know what we know.

Alignment: Who Decides What’s “Safe”?

The growing debate over AI “alignment” is typically framed as a safety concern. How do we ensure AI doesn’t mislead, manipulate, or cause harm? But embedded in every alignment strategy is a deeper question: aligned to whom, and by what standard? Techniques like reinforcement learning from human feedback (RLHF), system-level prompt filtering, and rule-based content moderation are all attempts to constrain model behaviour. Yet these constraints reflect the values of whoever designs them — not necessarily the values of the user, the public, or the institution deploying the model.
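
To make this concrete, here is a minimal sketch, in Python, of the kind of rule-based moderation layer described above. The rule names, patterns, and refusal text are invented for illustration and are not drawn from any vendor’s actual pipeline; the point is simply that the “safety” behaviour a user experiences is ordinary code and data, authored by whoever writes the policy.

```python
# A minimal, illustrative rule-based moderation layer of the kind described
# above. The rule names, patterns, and refusal text are hypothetical; in a
# real deployment they encode the values of whoever writes the policy.

import re
from dataclasses import dataclass


@dataclass
class Rule:
    name: str     # category label chosen by the policy author
    pattern: str  # regex the filter treats as "unsafe"


# The policy is just data, but it is authored by a small group of people.
BLOCKLIST = [
    Rule(name="medical_advice", pattern=r"\b(dosage|prescribe)\b"),
    Rule(name="legal_advice", pattern=r"\b(sue|liability waiver)\b"),
]

REFUSAL = "I can't help with that request."


def moderate(model_output: str) -> str:
    """Return the model's answer unless a rule matches, in which case refuse.

    Every judgment call (which patterns, which categories, what the refusal
    says) is made upstream of the user, which is the point of the example.
    """
    for rule in BLOCKLIST:
        if re.search(rule.pattern, model_output, flags=re.IGNORECASE):
            return REFUSAL
    return model_output


if __name__ == "__main__":
    print(moderate("You could sue them for breach of contract."))  # refused
    print(moderate("The forecast for tomorrow looks mild."))       # passes through
```

None of this is exotic engineering; the significant questions are who gets to edit the blocklist, and under what process.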

In the name of safety, alignment protocols often encode the normative frameworks of a few centralized actors — typically those who control the model’s fine-tuning pipeline. OpenAI, Anthropic, and Google DeepMind each describe their alignment goals in abstract terms: helpful, harmless, honest. But how those terms are operationalized is rarely transparent. What counts as “harmful”? Whose “helpfulness” matters? These questions are especially urgent for public sector deployments, where models are used to assist in law, education, social services, and healthcare. Alignment in these domains is not just a technical issue — it is a democratic one.

Epistemic Power and the New Gatekeepers

In philosophy, epistemic power refers to the authority to define and shape what counts as knowledge. Traditionally, this power rested with institutions: universities, courts, legislatures, libraries, scientific communities. These bodies developed standards of evidence, peer review, and interpretive frameworks to ensure that knowledge was contested, transparent, and rooted in public process. Today, much of that power is migrating to AI systems — and by extension, to the private actors who control them.

When a model decides which documents to summarize, which facts to omit, or how to answer a moral question, it exercises epistemic judgment. But unlike traditional institutions, it does so without deliberation, accountability, or appeal. The logic is encoded in weights, filters, and post-processing scripts — often invisible to users and immune to challenge. In effect, we are building unaccountable epistemic engines that mediate public reasoning. And as these systems become embedded in decision-making processes across sectors, their influence will shape not only how we see the world, but also what we believe is possible.

From Explainability to Contestability

Explainability — the ability to interpret a model’s decision — is often touted as a solution. But in systems as complex and emergent as foundation models, explanations can be shallow or misleading. What we need is something deeper: contestability. The ability to interrogate, audit, challenge, and adapt how a model behaves in practice. This requires tools, institutions, and processes that allow diverse stakeholders — not just engineers — to participate in shaping AI systems.

Public sector deployments should not rely on fixed alignment layers inherited from vendors. Instead, they should develop domain-specific alignment protocols, grounded in law, community norms, and contextual expertise. This means investing in policy-tuning workflows, human-in-the-loop evaluation, participatory red-teaming, and robust opt-out mechanisms. Alignment should not be a one-time act — it should be a living process, subject to oversight and revision.
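
As a rough sketch of what that could look like in practice, the fragment below models a domain-specific alignment policy as a small, versioned data structure owned by the deploying institution, with an explicit human-in-the-loop escalation rule and an opt-out path. All field names and example values are hypothetical.

```python
# A sketch of a "living" alignment policy: a versioned, institution-owned
# data structure plus a human escalation rule, rather than a fixed filter
# inherited from a vendor. Every name and value here is hypothetical.

from __future__ import annotations

from dataclasses import dataclass
from datetime import date


@dataclass
class AlignmentPolicy:
    domain: str                     # e.g. a specific public service
    version: str                    # policies are revised, not frozen
    adopted: date
    grounded_in: list[str]          # statutes, community norms, expertise
    escalation_triggers: list[str]  # cases that must reach a human reviewer
    opt_out: str                    # how a person gets a non-AI pathway


POLICY = AlignmentPolicy(
    domain="benefits eligibility assistant",
    version="2025-03",
    adopted=date(2025, 3, 1),
    grounded_in=["national benefits act", "municipal service standards"],
    escalation_triggers=["eligibility denial", "ambiguous household status"],
    opt_out="caseworker review on request, no justification required",
)


def needs_human_review(case_summary: str, policy: AlignmentPolicy) -> bool:
    """Route a case to a human whenever a policy trigger appears in it."""
    text = case_summary.lower()
    return any(trigger in text for trigger in policy.escalation_triggers)


if __name__ == "__main__":
    # An eligibility denial trips the escalation rule and goes to a person.
    print(needs_human_review("Draft: eligibility denial for applicant 4411", POLICY))
```

The point is not the code but the ownership: the policy names its legal and community grounding, it is versioned so it can be revised under oversight, and escalation to a human is a rule of the system rather than an afterthought.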

Governance must also extend beyond the model to the surrounding infrastructure: data pipelines, evaluation metrics, logging systems, and update mechanisms. Without control of these layers, public institutions will remain epistemically dependent — forced to trust black boxes they cannot challenge.
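
One concrete piece of that surrounding infrastructure is decision logging. The sketch below shows, with assumed and simplified field names rather than any existing logging standard, the minimal record an institution might keep so that a model-mediated answer can be audited, challenged, and appealed after the fact.

```python
# A sketch of the logging layer that contestability depends on: every
# model-mediated decision is appended to a record that ties the output back
# to the model build, the active policy version, and the inputs, so it can
# be audited and appealed later. Field names are illustrative, not a standard.

import hashlib
import json
from datetime import datetime, timezone


def log_decision(path: str, *, model_id: str, policy_version: str,
                 prompt: str, output: str, deployment: str) -> str:
    """Append one auditable decision record and return its content hash."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "deployment": deployment,          # which public service used the model
        "model_id": model_id,              # the exact model build that answered
        "policy_version": policy_version,  # the alignment policy in force
        "prompt": prompt,
        "output": output,
    }
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    record["record_hash"] = digest         # lets a later audit detect tampering
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return digest


if __name__ == "__main__":
    receipt = log_decision(
        "decisions.jsonl",
        model_id="local-model-0.1",
        policy_version="2025-03",
        prompt="Summarize the appeal deadline rules.",
        output="Appeals must be filed within 30 days.",
        deployment="benefits assistant",
    )
    print("logged decision:", receipt)
```

With records like these in hand, an auditor, an ombudsperson, or the person affected can reconstruct what the system said and under which policy — the precondition for any meaningful challenge.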

Sovereignty Is Still Ours to Shape

Across this series, we’ve traced a central theme: that control over digital systems — from infrastructure to models to governance — is not just a technical question. It is a question of agency. Who builds the stack? Who maintains it? Who decides what it can say, and to whom? These are political questions, cultural questions, generational questions. But they are also solvable ones.

Sovereignty does not require isolation. It requires choice. It requires building systems that are transparent enough to audit, modular enough to reconfigure, and participatory enough to reflect the public good. Whether it’s open models, sovereign clouds, or contestable alignment, the goal is the same: to embed democratic values into the very logic of the machines we are beginning to trust with our knowledge, our decisions, and our futures.

The future is not yet encoded.


If you work in public policy, AI strategy, or digital governance, this chapter is your call to reflection. We examine how alignment shapes meaning, how epistemic power is being quietly centralized, and what it will take to design contestable, accountable AI.

Download the Full Whitepaper: Building the Stack: From Cloud Dependency to Sovereign Control (PDF)

Like what you're reading?

Subscribe & Join the Conversation. Share with policymakers, researchers, or ethics leads. If this chapter resonated, please repost and amplify the signal. Interested in speaking engagements, advisory work, or co-designing alignment protocols? Let’s talk.

Let’s Discuss:

Who should decide what an AI system can say, censor, or prioritize — and how should that decision be made? What does “contestability” look like in your sector?

Tag someone working in AI oversight, digital rights, or public reasoning who should weigh in. We’re open to panels, governance pilots, and collaborative audits.


This Concludes the Sovereign Stack Series

Up next: AI: Tuning the Machine — a new series exploring how we shape the behaviour, intent, and cognition of digital intelligences. We’ll move beyond infrastructure into ethics, alignment, intentionality, and how to raise a new generation of machines with care.

Michael Aronsen

This is likely the most important future challenge for policy makers and anyone working with LLMs to handle. I liken it to the moral questions around self-driving cars: when do we let computers make important decisions, and who is responsible for those decisions? This will not be easy to solve, but it needs a solution, or more likely several solutions.

Shawna Tregunna

Great post! Trust is about building systems people can question, understand, and hold accountable. That’s how you get adoption that lasts!

Roy, I've enjoyed your entire series. Great to see your insights into this very timely topic.
