The Missing Layer: Why Context Is the Foundation of Data Governance
Data is everywhere. It flows through our health services, our schools, our justice system, and our news feeds. It fuels artificial intelligence, underpins policy decisions, and helps shape public opinion. Yet for all our talk about data as the lifeblood of modern systems, there is something we routinely forget. We forget that data does not speak for itself. It never has.
What gives data meaning is context. And context is the first thing we strip away when we try to make data travel faster, when we aim for scale, or when we search for universal truths in local facts. This missing layer is the quiet foundation that holds our systems together, even if we rarely give it a name.
It is time we paid attention to it.
Why Context Matters More Than We Admit
There are moments when context should be obvious. A person is overheard using a word that is widely considered unacceptable. The word is reported without explanation and becomes a headline. Outrage follows. But few people ask the harder question. Was it a quote? Was it ironic? Was it said in anger, in jest, or in private reflection? These things matter. Without them, we lose the ability to judge fairly.
The same is true in the world of data. When a dataset is published or shared, it is often taken at face value. We might describe it as high quality or low quality. We might call it complete or incomplete. But those are not universal judgements. They are context-specific. What counts as high quality in one setting may be completely unusable in another. A value that seems wrong may make perfect sense if you understand where it came from.
And yet, these kinds of discussions are not common outside specialist circles. In policymaking, the language around data is often overly simplistic. In journalism, it is frequently sensational. In technology, it is too focused on movement and transformation. The result is a kind of collective amnesia. We forget where data comes from. We forget what it was originally meant to do. And we forget how fragile meaning can be once context is lost.
The Lifecycle of Data Is Missing From the Conversation
One of the reasons we struggle with context is that we rarely talk about data as something that changes over time. But data has a lifecycle, just like any other asset. It is created, validated, used, transformed, stored, reused, and eventually discarded. Each of those stages involves assumptions and decisions. Each stage adds or removes context. And each stage changes what the data means, or at least what it is capable of supporting.
Take a routine blood test as an example. In the clinical setting where it is taken, the result is part of a story. It is linked to a patient, a reason for testing, a set of symptoms, a particular moment in time. But if that same data is later reused in an AI tool, or in a research study, or in a policy dashboard, the original story is often absent. What remains is just a number. The risk is that it will be taken out of context, or worse, treated as if it never had a context at all.
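To make that loss tangible, here is a minimal sketch in Python. The field names are illustrative rather than drawn from any clinical standard, but they show the difference between a result that carries its story and the bare number that typically survives reuse.

```python
from dataclasses import dataclass
from datetime import datetime

# A minimal, illustrative sketch: a result that still carries its clinical
# context versus the bare value that downstream reuse often keeps.
# Field names are assumptions for the example, not a real standard.

@dataclass
class ContextualResult:
    value: float            # e.g. 5.8
    unit: str               # e.g. "mmol/L"
    test_code: str          # what was measured
    patient_id: str         # who it belongs to
    reason_for_test: str    # why it was ordered
    collected_at: datetime  # when the sample was taken
    collected_by: str       # which service produced it

def strip_for_reuse(result: ContextualResult) -> float:
    """What many downstream pipelines effectively do: keep the number,
    discard the story. The value survives; the meaning does not."""
    return result.value

original = ContextualResult(
    value=5.8, unit="mmol/L", test_code="glucose",
    patient_id="local-0001", reason_for_test="diabetes review",
    collected_at=datetime(2024, 3, 14, 9, 30), collected_by="GP practice",
)
bare = strip_for_reuse(original)  # 5.8, with no way back to its context
```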
This is not just a technical issue. It is a governance issue. And it raises fundamental questions about how we manage information in complex systems. If we do not know the full story behind a piece of data, how can we judge whether its reuse is valid? How can we design controls that work in real-world conditions? And how can we build trust when we cannot explain the full journey that a data element has taken?
The truth is that we cannot. At least not reliably. And yet the idea of the data lifecycle is still missing from most public discussions, from many policies, and even from some system designs. That silence has consequences.
Interoperability Is Not Just About Movement
Over the past two decades, we have invested heavily in interoperability. The idea is simple. If systems can talk to each other, then services will become more efficient, information will be more complete, and users will benefit. In theory, that is hard to argue with. In practice, it is only half the story.
Too often, interoperability is treated as a technical challenge. The focus is on APIs, messaging standards, identity resolution, and real-time exchange. All of those things matter. But they are not enough. Because data that moves without context is often worse than data that stays put. It can be misleading. It can be misused. And it can give a false sense of confidence.
The harder challenge is to support purposeful interoperability. That means recognising that data is repurposed. A discharge summary created by a hospital may be reused in a community setting. A social care record may be used to inform population health planning. A patient-generated observation may end up in a predictive model. None of these uses are inherently wrong. But each one involves a change of context.
We need to start designing interoperability with that in mind. It is not enough to just move data. We must also move meaning. That means carrying provenance, preserving metadata, and being honest about uncertainty. It means supporting interpretation as well as transmission. And it means designing systems that help humans ask better questions, rather than just automating fast answers.
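As a rough illustration of what moving meaning rather than just data could look like, the sketch below wraps a payload in an envelope of provenance, purpose, and honest uncertainty. The structure and field names are assumptions made for the example, not an existing exchange standard.

```python
from dataclasses import dataclass, field

# A sketch of "moving meaning, not just data": the payload travels inside
# an envelope that records where it came from, what it was for, and how
# much confidence a receiver should place in it. Illustrative only.

@dataclass
class ProvenanceEnvelope:
    payload: dict                        # the data being exchanged
    source_system: str                   # who produced it
    original_purpose: str                # why it was created
    collected_at: str                    # when, in ISO 8601
    transformations: list[str] = field(default_factory=list)   # what has been done to it
    known_limitations: list[str] = field(default_factory=list)  # honest uncertainty

discharge = ProvenanceEnvelope(
    payload={"summary": "Admitted with chest pain, discharged on day 2"},
    source_system="Acute Trust EPR",
    original_purpose="handover to GP",
    collected_at="2024-05-01T16:00:00Z",
    transformations=["free text summarised by clinician"],
    known_limitations=["medication list unverified at discharge"],
)

# A receiving system can now ask whether the original purpose still fits
# the new use, instead of assuming the payload speaks for itself.
```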
Governance Without Context Is Performance
All of this leads us to governance. A word that gets used often, but rarely examined. In many organisations, governance is treated as a matter of policy and procedure. It is a way of proving that decisions are being made properly. That risks are being managed. That compliance is being achieved.
But governance that ignores context is often little more than performance. It looks convincing on the surface. It may even pass audits. But when you look closely, you find gaps. You find controls that do not match how the system actually works. You find decisions being made without the information needed to judge risk properly. And you find that trust in the system is being stretched thin.
Real governance is not just about rules. It is about operationalising complex concepts. It is about helping people make sense of data in specific situations. That means understanding the purpose of the data. It means recognising its origin. It means being able to distinguish between appropriate and inappropriate uses.
To do that well, governance must embrace context. It must be grounded in a deep understanding of how data is produced, used, and changed. It must support judgement, not just enforcement. And it must be built into the fabric of our systems, rather than bolted on at the end.
The Human Cost of Context-Blind Systems
These issues may sound abstract, but they have real consequences. When context is lost, systems become brittle. Mistakes happen more easily. Biases go undetected. Assumptions remain hidden. And the people affected are often those with the least power to question or correct the results.
In healthcare, a misinterpreted data point can lead to the wrong treatment. In welfare, an out-of-context indicator can trigger an unjust penalty. In justice, a poorly understood pattern can support an unfair conviction. The risks are not theoretical. They are already playing out in small ways across thousands of systems, every day.
And as we move further into the world of AI, these risks grow. Machines do not understand context in the way humans do. They rely on signals and patterns, not stories and nuance. If we feed them data without history, they will draw conclusions without empathy. That is not a future we should accept lightly.
Rebuilding Trust Through Context
There is, however, a better path. We can start to rebuild systems that respect context. We can invest in metadata and provenance tracking. We can design tools that help users see the history of a data point. We can teach people to ask where data came from, what it meant at the time, and whether that meaning still holds.
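One way to picture provenance tracking is as an append-only history attached to each data item. The sketch below is illustrative only; real systems would more likely build on established standards such as W3C PROV, but the principle is the same: every stage of use leaves a record that a person can later read.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# An illustrative append-only provenance log. Names and structure are
# assumptions for the example, not a production design.

@dataclass
class ProvenanceEvent:
    when: datetime
    actor: str    # person or system responsible
    action: str   # created, validated, transformed, reused, ...
    context: str  # the setting and purpose at that moment

history: list[ProvenanceEvent] = []

def record(actor: str, action: str, context: str) -> None:
    history.append(ProvenanceEvent(datetime.now(timezone.utc), actor, action, context))

record("pathology lab", "created", "routine blood test for diabetes review")
record("hospital EPR", "validated", "result checked against reference range")
record("research platform", "reused", "aggregated into population study")

# A user asking "where did this number come from?" can read the answer
# rather than guess it.
for event in history:
    print(f"{event.when:%Y-%m-%d} {event.actor}: {event.action} ({event.context})")
```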
This will take work. It will require changes to how we train professionals, how we design interfaces, and how we write policies. It will also require a shift in mindset. We need to stop chasing the illusion of clean, simple data. And we need to start embracing the messiness of real-world information.
That messiness is not a problem to be solved. It is a reality to be understood. And once we understand it, we can build systems that are more resilient, more honest, and more capable of supporting the public good.
A Call to Action
If we are serious about data-driven transformation, we must take context seriously. That means more than just talking about data quality. It means recognising that quality is always a judgement, made in relation to purpose. It means thinking about data as a living artefact, not a static object. And it means designing for reuse, not just collection.
We need governance frameworks that support transparency and judgement. We need interoperability models that carry meaning, not just payloads. We need metadata registries that preserve the logic behind data structures. And we need leadership that understands that trust is not built through compliance alone, but through clarity, accountability, and care.
None of this is new. These ideas have been written about for decades. But what is new is the scale and speed at which our systems now operate. The risks of context loss are no longer small. They are systemic. And if we do not address them, we will find ourselves solving the wrong problems, over and over again.
We have the knowledge. We have the standards. What we need now is the will to use them.
Conclusion: Remember the Layer Beneath
Context is not a nice-to-have. It is not something we can add in later. It is the layer beneath the data, the thread that connects meaning to purpose, and purpose to outcome. Without it, we are guessing. We are automating without understanding. We are governing without insight.
But with it, we have the chance to build systems that learn, adapt, and support better decisions. We have the chance to create technology that respects the people it serves. And we have the chance to build a culture of data use that is as careful as it is clever.
Context may be quiet, but it is not invisible. We just need to start looking for it.
Author: Dr Tito Castillo FBCS CITP CDMP CHCIO
Tito is the founder of Agile Health Informatics Ltd, a specialist health and care IT consultancy. He is also a Board Member of the British Computer Society Faculty of Health and Care (Strategy & Policy Lead).