Essay 2: Agreeing on Values and Principles
Executive Summary
This second article in our series examines the challenge of ensuring equitable adoption of AI technologies across global markets, particularly in underserved communities. It identifies three key barriers: capital requirements, skills and infrastructure gaps, and trust issues stemming from historical inequities. The article proposes a layered approach to AI implementation, highlighting that while investment has concentrated on base model development (Layer 1), the greatest opportunities for impact lie in fine-tuning for specific applications (Layer 2) and deployment use cases (Layer 3). Policymakers must prioritize interventions that address market failures and ensure AI benefits reach those most in need.
Ensuring Equitable AI Adoption Across Global Communities
Adoption of general technologies often takes time, and diffusion into underserved markets is not guaranteed.[1] Often, those are the markets most in need of innovation. Consider neglected diseases like Chagas disease, which affects roughly twice as many people as died in the global COVID-19 pandemic, yet remains underfunded because the affected populations live primarily in the developing world. [2]
Ensuring that publicly beneficial technology can reach geographies and priority areas most affected by significant market failures—from mental health and aging populations to women’s health and more—will require significant policy and social intervention.
The Triple Challenge of Equitable Adoption
There are several practical challenges in these cases that add to the existing difficulties of diffusion and adoption.
1. The Capital Barrier
Many of these innovations (like AI) require significant capital, which today resides primarily in private institutions rather than governments. Where populations cannot afford significant costs, the fiduciary duty to generate profit for shareholders is put at odds with the public interest in benefiting from a new technology. The market is needed to support innovation, yet the people who might benefit from it most simply can't afford it.
Potential Solutions: It may be worth exploring policy levers that encourage pro-social behavior while also reducing burdens on companies themselves, for example tax credits.
Historical Precedent: When the AIDS epidemic was ravaging the African continent in the 1990s, and pharmaceutical companies were charging prices that were unaffordable for governments struggling to treat their populations, South African legislators passed a law that suspended IP rights and enabled generic drugs to be provided at cost. [3]
Key Question: Where and when is it suitable for policymakers to take action to ensure costs are not prohibitive?
2. Skills & Infrastructure Gaps
Many of these groups and populations aren't equipped to benefit from AI models, in terms of both skills and physical infrastructure like compute. Historically, there have been similar lags in Internet connectivity, for example.
The Asian Leapfrog Example: And yet, many countries in Asia have now leapfrogged the US, UK, and Europe in connection speeds and availability. How did this happen? Through strong industrial policy that capitalized on mobile networks built via cell towers, which don't require fixed lines and catered to how the majority of populations in Asia were adopting Internet use.
These hyper-local methods of adoption enabled uptake that was cheaper and faster, for governments and for Internet users alike, as mobile devices were far less costly and more accessible than PCs. [4] Education is particularly crucial when it comes to AI, as general models will need to be fine-tuned to suit local or otherwise specific use cases.
3. Trust & Social Acceptance
The importance of social proof and social change cannot be overstated. If AI is primarily introduced as a surveillance tool, or in ways that yield biased, inaccurate, or harmful outcomes for certain populations, it is unlikely to be trusted, which can delay its utility.
The Legacy of Mistrust:
Whether discrimination in how AI is disseminated is conscious or not, it has a lasting effect on uptake
The Tuskegee syphilis study and the experience of Henrietta Lacks's family have had lasting impacts on communities of color in the U.S.
These historical traumas have understandably fueled resistance to vaccinations and other medical interventions
At a time of deep distrust in institutions, it is crucial to ensure that innovation is socially accepted and caters to underserved populations rather than leaving them as an afterthought.
Successful Approaches: Policy interventions such as those piloted during the COVID pandemic, in which influencers from historically excluded communities took part in campaigns encouraging others to get vaccinated. [5]
Beyond Market Forces: The Case for Intervention
Public benefit from private innovation can happen, but it won’t happen equitably or reach those most in need without policy interventions in foreseeable market failures.
The Dual Outcome:
In some cases, an addressable market is left on the table that policy can unlock
In others, governments must assert that the public good trumps market forces and suspend the market outright
Regardless, inequitable diffusion isn't inevitable; it can be avoided through deeper study of the policy and social interventions that shaped historical diffusions of scientific benefit. One might argue that slow diffusion to historically excluded communities is itself a moral harm, compounding existing injustice. That is why public officials must not only pay attention to safety but also ensure that regulations are equally suited to equitable distribution and application of AI.
A Layered Approach to AI Interventions
If we hypothesize that AI-powered science tools are instruments that require an application layer for local or specific utility, we can look at policy and market interventions as a stack: Layer 1, base model development; Layer 2, fine-tuning for specific applications; and Layer 3, deployment use cases.
Current Investment Imbalance
To date, both public and private investments have largely been clustered in Layer 1, even though that is the hardest place to have impact.
Reality check:
Larger labs are funded by companies like Google and Microsoft, which can provide capital-intensive investment
It is unlikely that startups, competitors, or nationally funded AI efforts will overtake them at this stage
Frontier AI labs are betting on generality rather than solving specific challenges in niche, specialist areas (with some exceptions, like science)
The Specialist Opportunity
What's interesting is that general software companies haven't had much success tailoring products for specific industries, particularly regulated industries or niches that require specialist knowledge where they have relatively little expertise.
That leaves an opening for specialists in verticals like health, energy, law, and more to innovate in Layers 2 and 3, even though funding there has been slower.
Layer 1 may have been where the VC flywheel has spun, but returns are harder to come by and prove there, while Layers 2 and 3 remain largely untapped and are likely to support multiple winners per vertical. As a note, most of our personal angel investments in AI are in Layers 2 and 3.
A Call to Action
Policymakers and investors would be wise to focus on where they believe AI can have the most transformative impact, and to double down from both policy-intervention and investment perspectives, collectively driving energy toward tangible, publicly beneficial uses and outcomes. That would be a win-win scenario.
This is the second article in a three-part series exploring AI as a transformative scientific instrument and the policy considerations needed to maximize its societal benefits. You can find the first article, Learning from the Past, here https://www.linkedin.com/posts/dorothychou-_ai-science-innovation-activity-7321225430450020354-cqNU?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAi2WSsBheQQ8yfogvRWh-NfDa15W-NAoeA
About the Authors
Dorothy Chou is the Director of the Public Engagement Lab at Google DeepMind, where she helps enable meaningful public discussions through translating complex AI concepts. Dorothy is passionate about using technology as a force for positive change, both through her policy work and as an angel investor supporting underrepresented founders. With interests spanning bioethics and technology governance, she enjoys building bridges between technical innovation and social responsibility, working toward a future we can look forward to.
Nicklas Berild Lundblad is the Director of Public Policy at Google DeepMind, where he explores powerful questions at the intersection of technology, policy, and society. He thrives on connecting diverse stakeholders around shared visions for AI's future, describing his work as "a mix of foresight, insight and listening." An enthusiastic ambassador for thoughtful AI development, Nicklas enjoys facilitating conversations that bridge technical innovation with social impact, finding deep satisfaction in building collaborative networks that shape positive technological futures.
Terra Terwilliger is the Director of Strategic Initiatives at Google DeepMind, where she brings her Georgia roots and down-to-earth perspective to complex AI topics. As a strategic thought partner to the COO, she finds purpose in building a shared imagination about AI-enabled futures. Terra is passionate about harnessing technology's potential to improve lives, working with diverse teams to ensure AI benefits humanity in meaningful ways.
The views expressed in this article represent the authors' personal perspectives and not necessarily those of their affiliated organizations.
© 2025 Google DeepMind