Building the Stack – A Roadmap to Digital Sovereignty, Part 5: The Mirage of Openness
The Comfort of “Open”
In the wake of growing concern over proprietary AI platforms, the open-source model movement has emerged as a hopeful counterbalance. Open-weight models promise transparency, portability, and reproducibility — offering a path toward public sector autonomy and community-aligned development. Governments, researchers, and civic technologists increasingly point to open models like LLaMA, Mistral, Falcon, and Mixtral as alternatives to closed solutions from OpenAI, Anthropic, and Cohere. But while open weights are a necessary foundation for sovereignty, they are not sufficient. Access to model weights is not the same as control.
Much of the public discourse around “open AI” focuses on licensing and access. Can you download the weights? Can you fine-tune the model? Can you run it on your own hardware? These are important questions — but they ignore the broader operational layers that define whether a model is truly sovereign. In practice, most open-weight models are accessed through Foundation-as-a-Service offerings: managed endpoints, hosted environments, and prebuilt APIs that obscure the underlying infrastructure. The result? A growing reliance on tooling that feels open, but functions as a walled garden.
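The three questions above have concrete answers when you actually hold the stack. Below is a minimal sketch of what "run it on your own hardware" can look like using the Hugging Face transformers library; the checkpoint name, device placement, and generation settings are illustrative assumptions, not recommendations.

```python
# A minimal sketch of local, self-hosted inference over open weights:
# no managed endpoint, no provider-side logging or filtering in the loop.
# The checkpoint name and settings below are illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any open-weight checkpoint you may lawfully use
    device_map="auto",                           # place weights on local GPU(s) or CPU
)

result = generator(
    "Summarise the data-residency clauses in our hosting contract.",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```

Everything in that snippet, including where the weights live, what gets logged, and when the model changes, sits with the operator. The managed alternatives discussed next invert that arrangement.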
Foundation-as-a-Service: Dependency in Disguise
The rise of Foundation-as-a-Service platforms like Hugging Face Inference Endpoints, AWS Bedrock, Azure OpenAI, and private GPUaaS providers has made it easier than ever to deploy large models without standing up infrastructure. These services offer convenience, cost predictability, and access to a catalogue of pre-trained models — often branded as “open.” But the deployment layer is not neutral. It determines how models are monitored, which metrics are logged, how memory and compute are provisioned, and how updates are rolled out.
By abstracting these layers away from the user, Foundation-as-a-Service recreates the same lock-in dynamics we saw in proprietary SaaS. The user no longer controls the runtime environment, fine-tuning stack, or evaluation tooling. Worse, many of these services enforce subtle usage restrictions — from throttling to filtering to licensing handoffs — that make the open nature of the model irrelevant in practice. You can access the weights, but you can’t meaningfully govern how the model behaves at scale.
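For contrast, here is a sketch of the same task consumed through a managed, OpenAI-compatible endpoint. The URL, API key variable, and model identifier are hypothetical placeholders; the point is what sits on the far side of the call, where runtime, logging, filtering, quotas, and update cadence are all governed by the provider.

```python
# A contrasting sketch: an "open" model consumed through a hosted endpoint.
# Endpoint URL, key, and model name are hypothetical placeholders.
# Note what the caller does NOT control: the runtime, the logs, the filters,
# the quotas, and the moment the underlying model is swapped or retired.
import os
import requests

response = requests.post(
    "https://api.example-faas-provider.com/v1/chat/completions",  # hypothetical provider
    headers={"Authorization": f"Bearer {os.environ['PROVIDER_API_KEY']}"},
    json={
        "model": "open-model-7b-instruct",  # open weights, opaque deployment
        "messages": [
            {"role": "user",
             "content": "Summarise the data-residency clauses in our hosting contract."}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```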
This mirrors the shift we saw in cloud computing a decade ago: even if the components are open-source, the control plane is not. And the longer organizations rely on these platforms, the more they entangle themselves in opaque orchestration systems that cannot be reproduced, audited, or exited cleanly.
The Myth of “Open Enough”
The open model ecosystem is not monolithic. On one end, you have truly open models like Falcon and BLOOM, released under permissive licenses and designed for public benefit. On the other, you have partially open models like LLaMA and Gemma — weights available, but with restrictive usage terms, commercial limitations, and minimal visibility into training data or governance. Sitting in between are platforms that use open weights but wrap them in proprietary APIs, logging systems, and middleware — effectively making them open in name only.
This tiered openness creates confusion in policy and procurement. A model hosted on a sovereign-sounding endpoint may not be sovereign at all if the infrastructure, updates, and monitoring are centrally controlled. Similarly, the ability to download weights doesn’t mean those weights are usable without a specific inference engine, hardware stack, or fine-tuning workflow. In short: there is no such thing as a model without a context — and it is that context that determines who really holds the power.
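One small illustration of "no model without a context": downloaded weights only become a usable system once paired with a specific inference engine, precision, and hardware stack. The sketch below assumes the vLLM engine and an illustrative checkpoint; on a different engine or accelerator, the same weights may need conversion or quantisation, or may not load at all.

```python
# A sketch of the serving context that open weights depend on. Assumes the
# vLLM engine, a GPU with enough memory, and an illustrative checkpoint;
# none of this is implied by the weights file itself.
from vllm import LLM, SamplingParams

llm = LLM(
    model="tiiuae/falcon-7b-instruct",  # open weights, but inert without a compatible engine
    dtype="float16",                    # precision choice is part of the deployment context
)

params = SamplingParams(temperature=0.2, max_tokens=128)
outputs = llm.generate(
    ["Explain data residency requirements in one paragraph."], params
)
print(outputs[0].outputs[0].text)
```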
Governments and public institutions must resist the temptation to equate open weights with open control. Instead, they should interrogate the entire deployment lifecycle: from fine-tuning to inference to auditability. A model that can’t be governed, explained, or verified on one’s own terms may be technically open — but it is not operationally sovereign.
Sovereignty Beyond the Weights
What does true control look like in the age of open-weight models? It means owning or governing the full stack required to deploy, tune, monitor, and evolve AI systems. That includes inference engines, model registries, version control, prompt audit trails, red-teaming pipelines, and structured feedback loops. It also includes the ability to modify model architecture, retrain on local datasets, and enforce data residency requirements at every stage of the lifecycle.
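As one concrete example of a layer beyond the weights, the sketch below shows a locally governed prompt audit trail wrapped around self-hosted inference. The log path, record schema, and the injected generate callable are assumptions for illustration; the point is that the record never leaves infrastructure the institution controls.

```python
# A minimal sketch of a locally controlled prompt audit trail.
# The log location, record fields, and the injected generate() function
# are illustrative assumptions, not a standard.
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path
from typing import Callable

AUDIT_LOG = Path("/var/log/ai/prompt_audit.jsonl")  # stays on infrastructure you govern

def audited_generate(generate: Callable[[str], str], prompt: str, model_version: str) -> str:
    """Run inference and append a structured record to a local, append-only audit log."""
    response = generate(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,                                # which weights answered
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),  # auditable without storing raw text
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    AUDIT_LOG.parent.mkdir(parents=True, exist_ok=True)
    with AUDIT_LOG.open("a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")
    return response
```

The same pattern extends to model registries, red-teaming pipelines, and structured feedback loops: each is ordinary engineering, but only if the institution owns the place where the records land.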
Crucially, sovereign AI is not about rejecting collaboration. It’s about ensuring the ability to say “no” when platforms no longer serve the public interest. That requires more than just open weights — it demands transparent infrastructure, reproducible workflows, and interoperable governance layers. Without those, even the most permissively licensed model can become a dependency wrapped in illusion.
Sovereignty also requires people. Open models must be accompanied by public-sector capacity — engineers, researchers, procurement officers, and auditors — who understand how to evaluate not just model performance, but the systems and incentives that surround it. Investments in tooling must be matched by investments in institutional literacy, so that governments can lead, not follow, in the AI era.
Building Real Autonomy
The path forward is not to abandon Foundation-as-a-Service entirely, but to constrain its role and build parallel paths for sovereign deployment.
Just as governments maintain public roads, national research networks, and healthcare infrastructure, they must now invest in AI infrastructure that cannot be privatized by convenience. The open model movement has laid the foundation — but foundation is not the same as structure. Control is built one layer at a time.
If you work in AI policy, digital procurement, or cloud platform strategy, this chapter is your checkpoint. We explore the gap between access and control — and outline how to move beyond symbolic openness toward real autonomy in how AI is built, tuned, and deployed.
Download the Full Whitepaper: Building the Stack: From Cloud Dependency to Sovereign Control (PDF)
Like what you're reading?
Subscribe & Join the Conversation. Share with civic technologists, digital infrastructure leads, or policy advisors. If this chapter gave you pause, give it a repost. Interested in sovereign platform pilots, procurement reform, or infrastructure design? Let’s talk.
Let’s Discuss:
What’s your take — is “open weights” enough for institutional trust and independence? Where do you draw the line between access and real control?
Tag a colleague in procurement, AI strategy, or platform engineering who should weigh in. We’re open to co-developing playbooks, speaking on panels, or helping with stack evaluations. Let’s connect.
Coming Next Week
Part 6 – Who Shapes the Truth? AI Governance, Alignment, and the New Epistemic Power. We'll explore how centralized AI systems shape knowledge, bias, and institutional memory, and how auditability, contestability, and public alignment mechanisms can restore balance.