The Case for Socio-Technical Standards in Digital Health and AI

🌀 THE RIFF | Edition #9

As the world becomes more interconnected, population health—long the foundation of productivity and sustainability—has an unprecedented opportunity to evolve. Treating illness alone is no longer enough. We must design systems that actively promote well-being, equity, and resilience. That means rethinking how we govern, collaborate, and design through technology.

Standards as the Backbone of Trust

Global standards are essential. They ensure that digital innovations—especially in healthcare—are scalable, secure, and accessible. But the deeper challenge lies beyond the technical layer.

Digital health and AI can widen access and improve coordination. Yet, left unchecked, they risk reproducing bias, excluding communities, and deepening inequities. The UN, WHO, and OECD have all called for stronger digital cooperation to bridge divides and accelerate the Sustainable Development Goals. But declarations alone don’t solve implementation challenges. Healthcare illustrates this tension more than any other sector: it is where digital cooperation is most urgent—and most elusive.

So why, in a domain as regulated as healthcare, do we so often fail to embed digital technologies into frameworks built precisely for collaboration?

From Standards to Governance

Part of the answer lies in the standards themselves. Too often, they are narrowly technical when health is profoundly socio-technical. Health involves people, relationships, ethics, and context—not just software and systems.

When we ignore this, we risk creating tools that are technically precise but socially blind—solutions that may work brilliantly for one group but fail another, or worse, automate exclusion at digital speed.

This is where Europe offers a path forward. With the European Health Data Space (EHDS) and the AI Act, we are developing frameworks that could, for the first time, embed digital health within a broader vision of human-centred, rights-based innovation. Both are grounded in the New Legislative Framework: laws define essential requirements; standards interpret them.

That principle opens up a critical opportunity—and responsibility. If standards are where regulatory interpretation happens, then inclusivity must be designed in from the start. This is where Integrative Data Governance becomes essential.

Integrative Data Governance (IDG)

IDG is not just another compliance layer. It embeds coordination, context, and care into how we manage data, design systems, and deploy innovation. It recognises that datasets are not neutral, and that governance can either reinforce or reduce inequity.

Under IDG, data is not only collected; it is recycled from its various primary uses into ethically sound secondary uses: research, planning, and innovation that serves the many, not the few. Crucially, this reuse is anchored in patient agency, equity safeguards, and transparent oversight.

Applied to EHDS and the AI Act, IDG can transform fragmented datasets into shared governance ecosystems. It can ensure that standards don’t just define protocols—they define pathways: pathways to care, participatory design, and more inclusive innovation.

In practice, this means:

  1. Designing AI systems that are auditable, contextual, and adaptive.

  2. Setting data access rules that prioritise equity alongside efficiency, showing that inclusion, when guided by socio-technical standards, strengthens rather than slows systems (see the sketch after this list).

  3. Embedding standards that reflect diversity and lived experience; without them, technical performance looks strong in testing but collapses in deployment.
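
To make the first two points concrete, here is a minimal Python sketch of an equity-aware access check with a built-in audit trail. Every name in it (DataRequest, AccessDecision, the coverage threshold) is hypothetical, not a construct defined by the EHDS, the AI Act, or any IEEE standard; the point is simply that equity safeguards and oversight can live inside the access path rather than being bolted on afterwards.

```python
# Hypothetical sketch: an equity-aware data access check with a built-in
# audit trail. None of these names come from the EHDS, the AI Act, or IEEE
# P3493.1; they only illustrate the shape such a rule could take.
from dataclasses import dataclass, field
from datetime import datetime, timezone

ALLOWED_PURPOSES = {"research", "planning", "public_health"}

@dataclass
class DataRequest:
    requester: str
    purpose: str                # declared secondary use
    consent_verified: bool      # patient agency: consent status honoured
    population_coverage: float  # share of affected subgroups represented, 0..1

@dataclass
class AccessDecision:
    granted: bool
    reasons: list = field(default_factory=list)
    timestamp: str = ""

def evaluate(request: DataRequest, min_coverage: float = 0.8) -> AccessDecision:
    """Grant access only if purpose, consent, and equity safeguards all hold."""
    reasons = []
    if request.purpose not in ALLOWED_PURPOSES:
        reasons.append(f"purpose '{request.purpose}' is not an approved secondary use")
    if not request.consent_verified:
        reasons.append("patient consent status could not be verified")
    if request.population_coverage < min_coverage:
        reasons.append(
            f"dataset covers {request.population_coverage:.0%} of affected "
            f"subgroups; policy requires at least {min_coverage:.0%}"
        )
    return AccessDecision(
        granted=not reasons,
        reasons=reasons or ["all safeguards satisfied"],
        timestamp=datetime.now(timezone.utc).isoformat(),  # transparent oversight
    )

if __name__ == "__main__":
    decision = evaluate(DataRequest("oncology-registry", "research", True, 0.65))
    print(decision.granted, decision.reasons)
```

The design choice worth noticing is that refusal reasons are recorded alongside the decision, so oversight bodies can audit not only who accessed data, but why access was granted or denied.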

The innovation here is as much institutional as it is technical. It is about how we organise trust, negotiate trade-offs, and ensure accountability across complex systems.

The Deeper Risks We Face

Without integrative governance, two risks stand out for AI models and the systems built on them.

First, premature regulation. If laws are enforced before robust standards are in place, their aims can be undermined. Dilute the responsibilities of model providers, for example, and obscurity becomes easier than clarity.

Second, synthetic truth contamination. As AI models are increasingly trained on AI-generated content, originality and factual accuracy degrade. The knowledge base becomes recursively polluted. Distorted information is recycled, hallucinations multiply, and costs rise. This is not a minor glitch. It is a structural vulnerability—and some models already mask it.
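
A toy simulation makes the mechanism visible. Treat a "model" as nothing more than a categorical distribution over facts, retrain it each generation on a corpus sampled from its predecessor, and rare facts begin to vanish; once a fact stops being sampled it can never return. This is a deliberate caricature under arbitrary assumptions (50 facts, a long-tailed corpus of 300 samples), not the training dynamics of any real system.

```python
# Toy illustration of "synthetic truth contamination": a model retrained on
# its own output loses rare knowledge first. The "model" here is just a
# categorical distribution over facts; once a fact stops being sampled it
# can never reappear. A deliberate caricature, not a real training pipeline.
import random
from collections import Counter

random.seed(0)

facts = [f"fact_{i}" for i in range(50)]
# Generation 0: a long-tailed "human" corpus; fact_0 is common, fact_49 rare.
weights = [1.0 / (i + 1) for i in range(50)]
corpus = random.choices(facts, weights=weights, k=300)

for generation in range(1, 31):
    counts = Counter(corpus)
    # Refit and regenerate: the next corpus is sampled only from what survived.
    corpus = random.choices(list(counts), weights=list(counts.values()), k=300)
    if generation % 10 == 0:
        print(f"gen {generation}: {len(set(corpus))} of 50 facts still represented")
```

The long tail disappears first, and in health data the long tail is precisely where minority populations and rare clinical presentations live.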

Left unchecked, these dynamics render systems unsustainable. Without intervention, they will erode the foundations of responsible innovation.

We are acutely aware of these risks. That is why we have worked tirelessly on the General-Purpose AI Code of Practice (GPAI CoP) together with the EU AI Office. Building on the success of that collaborative effort, we are now developing a new GPAI CoP on Transparency as a placeholder mode of compliance, bridging the gap between the moment GPAI model provider obligations take effect and the adoption of standards, still a few years away. While the Codes are not legally binding, GPAI model providers can rely on them to demonstrate compliance with the obligations in Articles 50, 53, and 55 until those standards are developed.

Both Codes establish governance guardrails and introduce normative tools and mechanisms designed to guide enforcement. Crucially, we are developing these compliance mechanisms in a way that encourages the participation of the communities they will affect, empowering their inclusion rather than imposing solutions from above.

Living Standards for a Living Future

The way forward is to design dynamic, living standards—standards that evolve with the pace of change. This is the spirit of the UN Global Digital Compact: governance that is inclusive, adaptive, and continuously accountable.

Because standards are more than technical protocols. They are economic instruments. They determine access to innovation. And in doing so, they shape equity.

Building Bridges

So, where do we go from here?

We build trusted bridges—held to high engineering standards.

  • Bridges between developers and regulators.

  • Bridges between AI systems and the societies they serve.

  • Bridges between technology and the values we choose to uphold.

The future of population health is not about new tools alone. It is about new agreements, new responsibilities, and a new ethic of collaboration.

Standards are not a bureaucratic formality. They are our social contract for the digital age.

Let’s make them count.

Join Socio-Technical Standards Development in Cancer Care

The IEEE P3493.1™ Standard Framework for Secure, Compliant, Coordinated, and Inclusive Healthcare Data Recycling: Cancer Care.

Healthcare Data Recycling (HDR) facilitates the coordinated sharing of comprehensive clinical data throughout the patient care experience, helping to ensure the development of a cohesive dataset reflective of care context and outcomes. The purpose of this project is to streamline the repurposing of healthcare data for secondary use in clinical research and other healthcare applications. It provides context-sensitive information to diverse collaborators, facilitating the identification of patient-specific care needs to deliver individualised services. The goal is to help ensure access to properly labelled, tested, and evaluated datasets for use in clinical research and healthcare delivery. The standard also supports the creation of secure 'sandbox ecosystems' for developing and testing digital health functionalities such as artificial intelligence (AI).
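
Because P3493.1 is still under development, the following Python sketch is purely illustrative: it shows the kind of metadata a "recycled" record might carry (care context, consent scope, labelling and evaluation status) so that a sandbox can verify a dataset's fitness for a given secondary use before admitting it. None of the field or function names are drawn from the draft standard.

```python
# Illustrative only: metadata a "recycled" clinical record might carry so a
# sandbox can check its fitness for secondary use. Field and function names
# are hypothetical and are not taken from the IEEE P3493.1 draft.
from dataclasses import dataclass

@dataclass(frozen=True)
class RecycledRecord:
    record_id: str
    care_context: str         # e.g. "oncology/chemotherapy-cycle-3"
    consent_scope: frozenset  # secondary uses the patient agreed to
    label_validated: bool     # labels reviewed against care outcomes
    evaluation_passed: bool   # dataset-level quality checks passed

def admissible(record: RecycledRecord, use: str) -> bool:
    """A sandbox admits a record only if consent covers the requested use
    and its labels and evaluations have been verified."""
    return (
        use in record.consent_scope
        and record.label_validated
        and record.evaluation_passed
    )

record = RecycledRecord(
    record_id="rx-104",
    care_context="oncology/chemotherapy-cycle-3",
    consent_scope=frozenset({"clinical_research", "care_planning"}),
    label_validated=True,
    evaluation_passed=True,
)
print(admissible(record, "clinical_research"))  # True
print(admissible(record, "marketing"))          # False: outside consent scope
```

The usage example at the bottom shows the intended behaviour: the same record is admissible for clinical research but rejected for a use outside its consent scope.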

Dr Dimitrios Kalogeropoulos is Chief Executive, Global Health Digital Innovation Foundation; member of the WHO/Europe Strategic Partners Initiative for Data and Digital Health; Health Executive in Residence and MBA Health External Advisory Board member, UCL Global Business School for Health; Incoming Chair, IEEE European Public Policy Committee; Chair, Industry Connections program IC24-015-01, AI for Improved Public Health and Climate-Resilient Health Systems; Chair, IEEE SA Global Public Health Forum; Chair, IEEE P3493.1 Standard Framework for Secure, Compliant, Coordinated, and Inclusive Healthcare Data Recycling; Chair, Ethics subcommittee, Global Mobile Health App Registry; Member of the EU AI Office General-Purpose AI (GPAI) Code of Practice Plenary; and Member of the IEEE Global Policy Caucus.

 

Dmytro Biletskyi

Founder of Epic Rose | Driving Healthcare AI & Data-Driven Business Transformations | We Boost Business Efficiency through Automation, AI, and Beyond


We’re facing some of the biggest and most complex challenges around safety, reliability, and accountability in this fast-moving AI space. You really expanded my perspective on how governance can work here — many see standards as rigid and fixed, but governance is what must evolve with the field. A very interesting and important point of view — thank you for sharing, it was insightful to read.

The current mood in the U.S. is not to accept outside regulation or standards. The withdrawals from U.N. agencies and the reductions in funding, sometimes dramatic, are exhibit A.
