There Is Hope: Global AI Initiatives Transforming Society Beyond Profit
By: Dr. Ivan Del Valle - Published: June 10th, 2025
Abstract
Over the past five years, artificial intelligence (AI) has increasingly been directed towards public-interest and nonprofit endeavors, delivering tangible social benefits across the globe. This paper examines how nonprofit and public-sector AI initiatives – in domains such as healthcare, education, climate change, humanitarian aid and disaster response – are improving lives and communities rather than focusing on financial return on investment. Major investments by technology companies, governments, international organizations, and universities have enabled AI tools to be deployed for early disease detection in low-resource healthcare settings, personalized learning for underserved students, climate change mitigation and adaptation, and more efficient disaster relief operations. We highlight case studies from multiple continents and socioeconomic contexts, documenting measurable positive impacts such as improved medical diagnoses, increased crop yields for smallholder farmers, lives saved through predictive disaster warnings, and greater access to education. Direct quotes and public statements from stakeholders involved in these projects underscore a growing global commitment to harnessing AI for social good. A formal analysis of literature and project outcomes reveals that AI-driven interventions – backed by robust cross-sector partnerships and ethical frameworks – are helping address the United Nations Sustainable Development Goals. While challenges remain in scaling these solutions and ensuring equity, the evidence surveyed illustrates a hopeful trajectory: AI technologies, when applied beyond profit motives, are transforming society and advancing human well-being. The paper concludes with a discussion on best practices, ethical considerations, and the need for sustained support to amplify these positive initiatives worldwide.
Introduction
Advances in artificial intelligence have often been driven by competitive business interests, yet a parallel movement has been growing – one that uses AI to serve humanity and the planet. In recent years, the narrative around AI has expanded beyond corporate profit and productivity gains to include inspiring examples of “AI for Good.” Policymakers, technologists, and civil society leaders increasingly recognize that AI can be a powerful tool to tackle societal challenges, from improving public health to combating climate change. The past five years in particular (2020–2025) have seen an upsurge of global AI initiatives focused on public interest outcomes, fueled by major investments and collaborations that prioritize social impact over business ROI. As Google CEO Sundar Pichai observed at a 2024 United Nations forum, the key question is whether AI will create a future “where everyone thrives, or one where the rich get richer and the poor remain stagnant” – and he argued that with deliberate action, AI can help build a more equitable world. This paper aims to document and analyze how a diverse set of nonprofit, academic, and public-sector AI efforts are indeed pushing toward a future where everyone thrives.
Multiple indicators point to the rapid growth of AI for public good. Corporate philanthropy and public–private partnerships have poured resources into this area: for example, Microsoft expanded its “AI for Good” portfolio with a new $125 million commitment to societal challenges, and Google.org launched a $25 million open call to support AI solutions for the United Nations Sustainable Development Goals (SDGs). At the same time, governments and international bodies have embraced AI as a means to improve public services. The United Nations’ ITU now convenes an annual AI for Good Global Summit, positioning AI as a tool to “solve global challenges” in partnership with over 40 UN agencies. Academic analyses confirm the trend: McKinsey & Company found that between 2018 and 2023 the catalog of high-potential “AI for social good” use cases grew from about 170 to roughly 600, with over 80% of these use cases already implemented in at least pilot form. In short, the landscape of AI applications has shifted to actively include social innovation alongside commercial innovation.
Despite the acceleration of commercial AI deployment, these public-interest initiatives remind us that technology’s most profound value can lie in advancing societal well-being. The following sections of this paper provide a comprehensive review of nonprofit and public-sector AI applications across key domains: healthcare, education, climate and environment, humanitarian aid, and disaster response. Each section highlights representative projects and partnerships from the past five years, drawing on case studies and empirical outcomes reported in diverse regions. By focusing on measurable impacts – lives saved, diseases detected, students reached, resources conserved – we illustrate how AI technologies are being leveraged beyond profit, for humanitarian and developmental gains. We also incorporate insights from stakeholders (project leaders, partner organizations, and affected communities) through direct quotes and public statements, to understand their perspectives on AI’s social benefits and remaining challenges. In doing so, we maintain a balanced global perspective: our analysis covers initiatives from Asia, Africa, the Americas, and Europe, reflecting a range of socioeconomic settings.
The structure of this paper is as follows. First, we present a Literature Review of the emerging field of AI for social good, summarizing current research, frameworks, and documented trends in using AI to achieve public interest objectives. Next, we outline our Methodology, explaining how we selected and evaluated sources, and our criteria for highlighting certain projects as case studies. We then delve into each thematic sector in turn – Healthcare, Education, Climate Change and Environment, Humanitarian Aid, and Disaster Response – providing an in-depth look at exemplary AI applications in each area. (Notably, there is some overlap between humanitarian and disaster-related use cases; for clarity, we discuss them in an integrated manner while noting the unique context of each.) We also include a section on AI for Cultural Heritage and Inclusion, recognizing that societal impact extends to preserving culture and empowering marginalized groups. After presenting these case studies and thematic analyses, we offer a Discussion that synthesizes cross-cutting insights, such as common enablers of success, global collaboration models, and ethical considerations (e.g. fairness, transparency, and data privacy) that arise when deploying AI for public benefit. Finally, we conclude with reflections on the implications of these findings and recommendations for sustaining and scaling AI-for-good initiatives in the future.
By examining these initiatives in a formal academic context, this paper seeks to contribute to the understanding of how AI can be directed beyond profit motives to produce concrete social value. The evidence gathered paints an optimistic yet nuanced picture: AI is not a panacea, but when applied thoughtfully to public interest domains, it has become a catalyst for innovative solutions to longstanding problems. In the midst of global crises and inequalities, the case studies herein demonstrate that there is hope – AI is being harnessed around the world to transform society for the better, driven by a vision of technology as a servant of humanity’s collective well-being rather than just a driver of private profit.
Literature Review
The concept of “AI for social good” (AI4SG) has emerged as a significant interdisciplinary focus, bridging computer science, social sector practice, and public policy. Over the past half-decade, a growing body of literature and convenings has explored how AI techniques can address the United Nations Sustainable Development Goals and other humanitarian challenges. The United Nations has actively championed this agenda: the ITU’s AI for Good initiative, for example, is described as the UN’s leading platform for identifying innovative AI applications to solve global challenges, building the partnerships and standards needed to apply AI in education, health, sustainability, and more. In partnership with dozens of UN agencies and NGOs, the AI for Good Global Summit has, since 2017, showcased solutions ranging from disaster prediction to AI-driven agriculture. This reflects a broader institutional recognition that AI can be a powerful enabler for sustainable development.
Academic and industry research reinforces the potential of AI to contribute to public-interest goals. A 2024 McKinsey report found that AI applications now exist for all 17 SDGs – encompassing domains such as poverty alleviation, quality education, good health, climate action, and beyond – and noted that recent advances (especially in generative AI) are opening new possibilities for social impact. The report highlights how, compared to five years prior, the “universe of problems that AI may be able to address” has expanded substantially, thanks to improved algorithms and greater availability of data and computing power. Back in 2018, McKinsey’s initial scan identified about 170 AI use cases with potential social benefits, but many were still at concept or pilot stage. As of 2023, roughly 600 AI-enabled use cases supporting the SDGs have been documented, and more than 80% of those have been deployed in at least one real-world instance. This more than threefold increase in documented “AI for good” use cases within just a few years underscores how rapidly the field is maturing. Researchers attribute this growth to both technological progress and a surge in multi-sector support – including targeted funding, open research collaborations, and the formation of interdisciplinary teams focused on societal challenges (Bankhwal et al., 2024).
Crucially, literature emphasizes that AI is not a magic bullet for social problems. Multiple sources stress the importance of ethical guardrails and community involvement when deploying AI in sensitive contexts (Floridi et al., 2020; Vinuesa et al., 2020). The McKinsey (2024) study, for instance, notes that while AI offers transformative capabilities (from natural language processing to image recognition) for social good, these must be harnessed in trusted and responsible ways, with active risk management to avoid unintended harms. Key risks highlighted include biases in AI models that could exacerbate inequalities, privacy concerns when handling personal or sensitive data, and the need for transparency so that AI decisions can be understood and challenged (Whittlestone et al., 2019). In low-income or marginalized communities, additional challenges such as lack of digital infrastructure, limited local AI expertise, and potential skepticism or cultural barriers to new technologies have been noted (Sengupta et al., 2023). Thus, a recurring theme is that human oversight and ethical design must accompany AI deployments in the public sector – a point often summed up by practitioners as ensuring AI is used “with a strong human element supporting it”.
Notwithstanding these challenges, case studies documented in the literature provide encouraging evidence that AI can yield measurable social impact when applied thoughtfully. For example, studies in global health have shown that AI-driven diagnostic tools can match or exceed expert clinicians in detecting diseases like tuberculosis on medical images, offering hope for regions with severe doctor shortages. In education, controlled trials have begun to demonstrate improvements in student learning outcomes through AI-powered tutoring systems in under-resourced schools (De Simone et al., 2025). Environmental research shows AI models can improve climate predictions and natural resource management, potentially saving lives and preserving ecosystems (Rolnick et al., 2019). Moreover, the scale and speed at which AI operates allow it to tackle problems in ways traditional methods cannot – processing satellite data for disaster mapping in hours instead of weeks, or translating content into hundreds of languages instantaneously. As one United Nations World Food Programme report put it, AI “can save humanitarians time and money, meaning that every dollar spent has that extra reach and ability to make more of an impact”.
Another strand of relevant literature covers the role of funding and partnerships in advancing AI for public interest. The late 2010s and early 2020s saw the launch of several high-profile philanthropic and government programs dedicated to AI solutions for society. Microsoft’s AI for Good initiative (encompassing sub-programs in Earth, Health, Accessibility, Humanitarian Action, and Cultural Heritage) is one example frequently cited in case studies. Since 2017, Microsoft has committed over $165 million in grants and technology contributions to these efforts (Smith, 2019), enabling hundreds of projects worldwide. By 2020, its AI for Health program alone had partnered with over 200 grantees to address health inequities and advance medical research. Likewise, Google established an AI for Social Good program and, in 2019, held the Google AI Impact Challenge, awarding $25 million to support 20 nonprofit and research organizations using AI for issues like environmental conservation, healthcare access, and education. Follow-up reports indicate that many of those grantees have since delivered concrete results (Google, 2020). Beyond big tech, entities like the Global Partnership on AI (GPAI) – a consortium of governments and experts – and numerous academic centers (e.g., the Stanford Institute for Human-Centered AI, Montreal’s MILA with its AI for Humanity mission) have emerged to guide and fund socially beneficial AI research. These efforts highlight that broad coalitions of stakeholders are considered vital: no single sector can achieve AI’s promise for society in isolation. Public-private partnerships, in particular, are often singled out as effective mechanisms, combining the technical prowess and resources of the private sector with the on-the-ground knowledge and mandate of public agencies or NGOs (Floridi et al., 2020).
In summary, the literature portrays a field that is dynamic and interdisciplinary, blending optimism about AI’s capabilities with realism about implementation challenges. There is a clear consensus that AI has the potential to significantly advance social good in diverse areas – indeed, it is already being used to further all SDGs from eliminating hunger to improving education – but it must be approached with care. Responsible AI principles (transparency, fairness, accountability) and community engagement are frequently cited as prerequisites for success. As we turn to the specific sectoral case studies, these themes from the literature will be evident: the most impactful initiatives are those that combine cutting-edge technology with human-centered design, strong cross-sector collaboration, and a commitment to ethics and inclusivity. The following sections provide a closer look at how these principles are put into practice, and the real-world outcomes that have been achieved, in health, education, climate, and humanitarian contexts.
Methodology
This research employs a qualitative case study methodology, underpinned by an extensive review of secondary sources including academic publications, technical reports, organizational press releases, and news articles from 2020 to 2025. Our goal was to gather verifiable evidence of AI initiatives that were expressly oriented toward nonprofit or public-interest objectives, and to analyze their design, implementation, and impact. We began by surveying the scholarly literature on AI for social good (as reflected in the Literature Review above) to identify broad domains and success factors. We then conducted targeted searches for documented examples of AI applications in various sectors (healthcare, education, climate/environment, humanitarian aid, disaster response, cultural heritage, etc.), emphasizing recent developments and major projects with measurable results. Key search terms included combinations of “AI” with terms like “nonprofit,” “social good,” “health outcomes,” “education improvement,” “climate change AI,” “humanitarian AI,” and specific keywords for known initiatives (e.g., flood forecasting AI, tuberculosis AI tool, AI education pilot Africa, etc.). These searches were supplemented by reviewing conference proceedings (such as AI for Good Summit materials), reports from international agencies, and the corporate responsibility sections of leading technology companies.
From an initial pool of dozens of potential case studies, we applied the following inclusion criteria: (1) the initiative had to have a clear public-interest mission (e.g. improving health, education, environment, or welfare, rather than increasing profit or customer engagement for a private product); (2) it had to have been active or achieved significant milestones in roughly the last five years (circa 2019–2024); and (3) there needed to be documented evidence of outcomes or impact (quantitative or qualitative) in sources we could cite. We gave preference to projects backed by major investments or partnerships (indicating scale and commitment) and to those covering a range of geographic contexts for a balanced global perspective. In many cases, we relied on reports by reputable organizations (UN agencies, governments, well-known NGOs) or peer-reviewed studies for data on impact. When using press releases or news articles, we cross-verified details where possible (for instance, matching a quote or statistic from a news piece with data from an official report).
Each sector-focused section in this paper synthesizes information from multiple sources to provide a cohesive narrative of how AI is being applied in that domain. Direct quotations from stakeholders (project leaders, beneficiaries, etc.) were included to enrich the analysis; these were drawn from interviews and statements reported in media or organizational blogs. We approached these quotes critically, considering the speaker’s perspective and potential bias (e.g., a tech company executive’s optimistic framing of their philanthropic program, or a UN official’s advocacy for AI). Nonetheless, such firsthand statements are valuable for understanding the motivations and perceived significance of the initiatives.
It should be noted that our methodology is inherently subject to the limitations of available documentation. Many AI-for-good projects are ongoing, and comprehensive evaluations of their long-term impact may not yet exist in published form. In addition, there may be regional or grassroots AI initiatives with great impact that are underreported in English-language literature, which our search could have missed. We attempted to mitigate this by including a diversity of sources and by not limiting our search to academic papers alone (given that important information often resides in policy briefs, white papers, or online announcements in this fast-moving field). The Discussion section of this paper will reflect on these limitations and the need for further research. Despite these challenges, the methodology allowed us to aggregate a robust set of examples that illustrate the core thesis of this paper: that AI, when applied through nonprofit and public-oriented efforts, is already transforming lives and communities around the world. Each case provides insight into how hope in technology is being realized on the ground, beyond the boundaries of commercial use.
AI in Healthcare and Medicine
Healthcare has been one of the most impactful domains for public-interest AI deployment, with efforts focused on improving diagnostic capabilities, expanding healthcare access in underserved regions, and accelerating medical research. In the last five years, AI-driven tools have been introduced to support front-line health workers and patients, often in areas where specialized medical expertise is scarce. These initiatives, typically led by global health organizations, research institutes, or social enterprises, aim to reduce health inequities and save lives – rather than to generate profit. Below, we examine several key areas where nonprofit and public-sector AI in healthcare has made a difference, including disease screening and diagnosis, pandemic response, and biomedical research.
Augmenting Diagnostics in Underserved Areas: One of the clearest examples of AI’s social impact in healthcare is in the screening for diseases like tuberculosis (TB). TB remains a major killer in many low- and middle-income countries, yet diagnosing TB often requires skilled radiologists to interpret chest X-rays – a resource that is in short supply in high-burden regions. In 2021, the World Health Organization officially endorsed the use of AI-powered computer-aided detection (CAD) software for TB screening via chest X-rays, after studies showed that certain AI algorithms could detect TB as accurately as human experts. This WHO recommendation opened the door for widespread adoption of AI tools in national TB programs. A number of nonprofit and public-private initiatives have since accelerated implementation. For instance, the India-based startup Qure.ai (in partnership with NGOs and governments) deployed its AI TB screening tool in rural clinics across South Asia and Africa, enabling thousands of X-ray scans to be read in seconds, flagging likely TB cases for further testing. Likewise, Delft Imaging’s CAD4TB software (backed by the Dutch government and NGOs) has been integrated into mobile screening units in sub-Saharan Africa to identify TB in vulnerable populations like refugees.
A breakthrough in 2023 combined portable X-ray devices with AI diagnostics to reach remote communities. Unitaid (a global health agency) and the Clinton Health Access Initiative (CHAI) announced an ultra-portable, battery-operated digital X-ray system that is compatible with AI detection software for TB, weighing only ~5 kg and designed for use in hard-to-reach areas. This innovation means health workers can literally carry an X-ray machine in a backpack to villages that have never had such diagnostic services. The AI software analyzes the images on the spot to detect signs of TB with accuracy comparable to an expert radiologist. “This innovation will help bring expert-level TB screening closer to the people and communities most affected by the disease, where health facilities are often out of reach,” said Dr. Philippe Duneton, Executive Director of Unitaid. The project significantly lowered the cost of such portable X-ray systems (through a pricing agreement with the manufacturer), making them affordable to public health programs in 138 low- and middle-income countries. In Dr. Duneton’s words, “By making this technology more affordable and accessible, we are not only helping countries reach further with TB care, we are also reinforcing health systems to respond to lung disease more broadly.” Early deployments in countries like Vietnam, Kenya, and South Africa have shown promising results in finding previously missed TB cases. Similarly, Dr. Neil Buddy Shah, CEO of CHAI, highlighted the broader significance: “This agreement shows what’s possible when we make life-saving technology affordable for those who need it most… By dramatically lowering the price of these portable X-ray systems, we’re bringing a breakthrough solution directly to communities… It’s a powerful example of how smart market solutions can remove barriers to care and save lives today”.
Beyond TB, AI is also helping to screen for other diseases in low-resource settings. One notable nonprofit is RAD-AID International, which focuses on improving radiology services in underserved regions. In 2022, RAD-AID received support from Google.org’s Global Goals initiative to implement AI-based tools for interpreting medical imaging at hospitals in 20 countries (primarily across Africa, Asia, and Latin America). The AI systems assist with reading chest X-rays and lung CT scans to triage patients with respiratory diseases (including pneumonia or COVID-19) and with mammography to detect early signs of breast cancer. By augmenting the capacity of a limited health workforce, this project is expected to benefit over 30 million patients by ensuring timely diagnosis and communication of test results. Preliminary reports from pilot sites in Ghana and Kenya indicate that radiologists and general practitioners have welcomed the AI support: it helps prioritize urgent cases and catches subtleties that might be overlooked under heavy workloads. Importantly, RAD-AID is also training local healthcare staff in how to interpret AI outputs and maintain the systems, building sustainable capacity.
Maternal and Child Health: AI is being applied to make pregnancy and childbirth safer in resource-limited areas. In Guatemala, an NGO called Wuqu’ Kawoq (Maya Health Alliance) has developed an AI-driven toolkit for indigenous midwives to detect early warning signs of fetal distress and pregnancy complications. Guatemala has one of the highest neonatal mortality rates in Latin America, especially among its indigenous Maya communities who often lack access to advanced medical monitoring. The toolkit, named Safe@Natal, consists of a low-cost ultrasound device and a digital blood pressure monitor that connect to a smartphone app. Machine learning models analyze the ultrasound images and vital signs in real time to identify issues like abnormal fetal heart rates or obstructed labor that might require urgent referral to a hospital. By giving midwives in rural areas a decision-support tool, the project aims to eliminate preventable newborn deaths and maternal complications. This initiative, supported by university researchers and funded in part by philanthropic grants, has reported anecdotal successes: in early trials, midwives were able to flag high-risk pregnancies that would have gone unnoticed before, leading to timely transfers of mothers to higher-level care. Such AI interventions complement traditional knowledge with modern analytics, in a culturally sensitive way (the app’s interface and training were developed in the Maya Kaqchikel language).
Another example comes from East Africa. In Uganda, the Makerere University AI Lab is addressing diagnostic gaps by developing a 3D-printed smartphone adapter for microscopy that uses AI to analyze blood and sputum samples for diseases like malaria, TB, and cervical cancer. In many clinics, especially in rural Africa, microscopes might be available but trained lab technicians are not. Makerere’s solution allows a simple attachment to a phone to capture microscope images; an AI model then identifies malaria parasites or TB bacteria in the images, or detects precancerous cells in a Pap smear, with high accuracy. The system can reduce diagnosis time by 25% or more, meaning patients get results and treatment faster. By turning a smartphone into an “AI lab technician,” this innovation is expected to help overburdened health facilities manage patient loads and improve diagnostic reach. The project has received support from global health funders and intends to open-source the designs, so that any clinic with a 3D printer and basic equipment can fabricate the adapter and use the AI software.
Pandemic Response and Public Health: The COVID-19 pandemic (2020–21) further catalyzed nonprofit AI efforts in health. AI was used by public health agencies for everything from predicting outbreak spread to assisting with vaccine and drug development. For instance, early in the pandemic an AI platform developed by BlueDot (a Canadian health analytics company) reportedly detected unusual pneumonia cases in Wuhan by mining news and airline data, giving some of the first alerts of what became COVID-19. Once the pandemic hit, researchers worldwide employed machine learning to forecast case surges and optimize the allocation of limited resources like ICU beds and ventilators. In an open science effort, the COVID Moonshot project used AI-driven molecular modeling (with no commercial ownership) to search for antiviral drug candidates, demonstrating how collaborative AI can accelerate discovery for the public good. While these efforts were not all led by traditional nonprofits, they often had public-interest motivations and operated transparently, sharing data and findings freely to help humanity combat a global threat.
A concrete outcome of pandemic-era AI innovation is the Health Equity Tasker and Dashboard launched by the Novartis Foundation in partnership with Microsoft’s AI for Health program and local governments. Debuting in 2021, this AI system analyzes a city’s health, socioeconomic, and environmental data to identify neighborhoods at highest risk during health crises, thereby guiding equitable distribution of interventions (such as testing sites or vaccine clinics). For example, in New York City the AI4HealthyCities collaboration used AI to uncover cardiovascular risk factor hotspots by integrating datasets that were previously siloed. Insights from the tool helped city public health officials direct community health workers and resources to areas with disproportionate hypertension and heart disease burdens. By 2022, similar approaches were being piloted in urban areas of Brazil and Southeast Asia, illustrating the portability of AI solutions for urban health equity.
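The data-integration step behind such equity dashboards can be made concrete with a brief sketch. The snippet below is a deliberately simplified, hypothetical illustration (the neighborhoods, indicators, and figures are invented for exposition, and this is not the actual AI4HealthyCities system): previously siloed health, socioeconomic, and environmental datasets are joined on a shared geographic key, each indicator is normalized, and areas are ranked by a composite risk score.

```python
import pandas as pd

# Hypothetical, illustrative data only (not real city statistics).
health = pd.DataFrame({
    "neighborhood": ["A", "B", "C"],
    "hypertension_rate": [0.31, 0.18, 0.42],   # share of adults
})
socio = pd.DataFrame({
    "neighborhood": ["A", "B", "C"],
    "poverty_rate": [0.25, 0.10, 0.38],
})
env = pd.DataFrame({
    "neighborhood": ["A", "B", "C"],
    "pm25_ugm3": [12.0, 8.5, 15.2],            # fine-particulate air pollution
})

# Join the formerly siloed datasets on the shared geographic key.
merged = health.merge(socio, on="neighborhood").merge(env, on="neighborhood")

# Min-max normalize each indicator to [0, 1], then average into one score.
indicators = ["hypertension_rate", "poverty_rate", "pm25_ugm3"]
for col in indicators:
    rng = merged[col].max() - merged[col].min()
    merged[col + "_norm"] = (merged[col] - merged[col].min()) / rng

norm_cols = [c + "_norm" for c in indicators]
merged["risk_score"] = merged[norm_cols].mean(axis=1)

# Rank neighborhoods so outreach can target the highest-risk areas first.
hotspots = merged.sort_values("risk_score", ascending=False)
print(hotspots[["neighborhood", "risk_score"]].to_string(index=False))
```

Real deployments of course rest on far richer data and modeling, but the core pattern – linking datasets that were previously held in separate silos and surfacing the areas with disproportionate burden – is what the paragraph above describes.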
Accelerating Medical Research: AI’s contributions to healthcare are not limited to frontline service delivery; they also extend to biomedical research for the public good. One landmark achievement was the development of AlphaFold by DeepMind (a research lab owned by Alphabet/Google). AlphaFold is an AI system capable of predicting protein structures with remarkable accuracy – a breakthrough that can spur drug discovery and understanding of diseases. In July 2021, DeepMind and its partners made AlphaFold’s predictions for the entire human proteome (and those of 20+ other organisms) freely available in a public database. This act of openly sharing AI research output, rather than monetizing it, was widely hailed as a huge boon for scientists everywhere. By 2023, Google reported that the AlphaFold database had been accessed by over 2 million researchers in more than 190 countries, 30% of whom are in developing countries. Sundar Pichai noted in 2024 that “globally, AlphaFold is being used in research that could help make crops more resistant to disease, discover new medicines,” among other applications. Indeed, researchers in South Africa have used AlphaFold data to study variants of tuberculosis bacteria in pursuit of better treatments, and agricultural scientists in India are exploring crop protein changes to breed climate-resilient rice varieties. This demonstrates how a cutting-edge AI, backed by major corporate investment, was deployed with a public-good approach (open access), catalyzing scientific innovation far beyond corporate walls.
Similarly, AI is aiding clinical research on diseases that disproportionately affect the poor. For example, in 2022 a collaborative team from Google Research and academic partners in India and South Africa developed an AI model to interpret chest X-rays for TB (mentioned earlier) and published their findings in Radiology. By proving that AI could match expert radiologists in detecting TB, their study supported WHO’s policy and provided an open benchmark for others to improve upon. Another case is the use of AI in malaria research: nonprofit research institutes have applied machine learning to genomic data of malaria parasites to predict drug resistance patterns, helping public health officials in Africa to update treatment guidelines more quickly and accurately.
In aggregate, the examples above illustrate a multifaceted impact of AI in healthcare when guided by public interest aims. From remote villages to urban hospitals, AI tools are extending the reach and quality of care, often compensating for resource gaps. Importantly, these interventions are typically carried out in partnership with local health ministries, NGOs, and community health workers to ensure they address real needs and are culturally appropriate. Many are funded by philanthropic grants or international aid, highlighting the role of major investments in scaling health AI for good. Microsoft’s AI for Health initiative (launched in January 2020 with a $60 million pledge) has supported over 200 such projects globally, focusing on issues like maternal mortality, pandemic modeling, and health equity mapping. These philanthropic efforts provide cloud computing credits, AI expertise, and cash grants to nonprofits and researchers – resources that would be hard to obtain otherwise – thereby enabling innovation in settings that the commercial market might overlook.
Equally critical is the focus on measuring impact. Health-focused AI projects often track metrics such as number of cases detected, reduction in diagnostic delay, treatment adherence improvements, or lives saved. For example, preliminary data from AI-based TB screening programs indicate increased case detection rates (in some pilots, 15–20% more TB cases were identified compared to standard screening, meaning those individuals could be treated and cured, preventing further transmission). In breast cancer, early results from AI-enhanced screening in rural clinics (e.g., in Mexico with support from the MIT Jameel Clinic and local NGOs) show higher referral and early detection rates, which should translate to better survival once long-term outcomes are assessed. These measurable benefits reinforce the argument that AI can be a force-multiplier in global health – not replacing healthcare professionals, but augmenting their capabilities and focusing attention where it’s needed most. A quote from a physician in Zambia who used an AI TB tool encapsulates this: “The AI doesn’t get tired or biased. It’s like having an ever-vigilant colleague who helps me not miss a diagnosis. In the end, it means more patients get the care they need” (Doctors Without Borders, 2022 report).
In summary, nonprofit and public-sector applications of AI in healthcare have begun to show life-saving impacts in the past five years. They range from high-tech innovations like AlphaFold that accelerate research globally, to pragmatic tools like diagnostic algorithms on portable devices that empower health workers in remote areas. These initiatives prioritize improved patient outcomes and health equity over financial returns. They are backed by significant collaborations – often a mix of tech companies contributing expertise, NGOs providing ground presence, and governments enabling integration into health systems – illustrating the ecosystem required for AI in health to thrive. The next challenge is scaling these successes further, integrating AI tools sustainably into health systems, and continuing to evaluate them rigorously. Nonetheless, the trajectory is positive: AI is helping doctors and nurses deliver better care in places that need it most, and tackling diseases that have long plagued humanity. As we move forward, the lessons from these health projects (particularly regarding ethics, community trust, and capacity building) can inform AI-for-good efforts in other sectors, such as education, which we turn to next.
AI in Education and Learning
Education is another sector where AI has begun to make inroads in the public interest, with the aim of enhancing learning outcomes, personalizing instruction, and expanding access to quality education for underserved communities. Over the last five years, several pilot programs and initiatives have demonstrated that AI-powered educational tools – from intelligent tutoring systems to automated content creation – can benefit students and teachers, particularly in regions with teacher shortages or significant educational disparities. Unlike commercial e-learning products targeting affluent markets, the efforts highlighted here are typically led by nonprofits, universities, or government partnerships and focus on low-income or marginalized learners. They strive to ensure that AI helps bridge gaps in education, not widen them.
Intelligent Tutoring and After-School Programs: A landmark study in 2024 provided some of the first rigorous evidence of AI’s impact in a developing country classroom. The World Bank, along with local partners in Nigeria, conducted a pilot project using a generative AI tutor in an after-school program for secondary students in Edo State (De Simone et al., 2025). The AI system – essentially a chatbot powered by a large language model – was designed to support students in practicing English language skills and other subjects by engaging them in interactive dialogues, answering questions, and providing feedback. Trained facilitators were present to guide the AI sessions and help integrate them with the curriculum. The results of a randomized controlled trial were striking: students who participated in the AI-assisted program for 6 weeks showed significantly higher gains in learning than those who did not. On post-intervention tests, the AI group outperformed the control group in English (the primary focus) as well as in general digital skills and even in other academic subjects on their year-end exams. Notably, the benefits were broad-based – the AI tutor helped not just the top students but also those who were initially struggling. In fact, the gender gap in performance narrowed: “Girls, who were initially lagging boys in performance, seemed to gain even more,” the project report noted, suggesting that the AI provided a supportive learning environment that helped female students catch up. One student participant, Omorogbe “Uyi” Uyiosa, described his experience: “AI helps us to learn, it can serve as a tutor, it can be anything you want it to be, depending on the prompt you write,” Uyi said, reflecting excitement at the new mode of learning. 
This direct testimony from a learner underscores how novel and empowering the experience was – the AI could be asked endless questions without judgment or fatigue, complementing the human teachers who were often overextended in large classes.
The success of the Nigeria pilot – which was one of the first of its kind in sub-Saharan Africa – has inspired scaling plans. Education authorities are considering expanding the after-school AI program to more schools in Edo State and beyond, and the World Bank is distilling lessons for other countries. The project highlighted that implementation matters: the AI tutor was most effective when integrated with human facilitation and when students were trained on how to interact with it (prompting, asking effective questions, etc.). It wasn’t a plug-and-play solution, but when thoughtfully applied, it proved AI can act as a low-cost “personal tutor” for students who otherwise have little individualized support. In regions where teacher-to-student ratios are very high and tutoring is unavailable to most, such AI systems have the potential to democratize some of the benefits of personal tutoring. Furthermore, because the AI was accessible on basic computers and worked offline after initial setup, it was feasible even in contexts with limited internet – a critical consideration for equity.
Personalized and Early Childhood Learning: AI is also being leveraged to tackle disparities in early childhood education. In India, for example, the Rocket Learning Foundation, a nonprofit focused on early education for low-income families, is developing an AI-enabled learning platform for preschool-aged children and their parents. Across India, an estimated 35 million young children lack access to quality early childhood education, leaving them to start primary school at a disadvantage in literacy and numeracy. Rocket Learning’s approach mobilizes communities through WhatsApp and basic smartphones, sending educational content to parents and training them as first teachers. Now, with support from Google.org’s AI for the Global Goals program, Rocket Learning is integrating generative AI models to create localized academic content automatically and to provide an AI-based coaching chatbot for parents and educators. The AI coach can answer parents’ questions (in local languages) about how to teach alphabet letters or do simple math activities at home, and it can analyze photos or recordings of a child’s work to give tailored feedback or suggestions. Additionally, the system uses machine learning to personalize learning pathways for each child – for instance, if a child has mastered basic counting, the app will suggest the next challenges, whereas if a child is struggling with a concept, it will offer more practice in that area. By automating content generation, the initiative addresses the shortage of relevant early education material in many Indian languages and contexts. The envisioned impact is to increase school readiness among tens of thousands of underprivileged children, thereby reducing the achievement gap.
While still in development, small-scale trials in 2023 showed promising engagement: parents who used the AI-assisted content reported spending more time on learning activities with their children and observed improvements in children’s recognition of letters and numbers (Rocket Learning, 2024 progress report).
In East Africa, a social enterprise named EIDU is working in Kenya to bring adaptive learning software to low-income schools and offline environments. EIDU’s platform, which runs on low-cost Android smartphones and tablets, offers interactive learning exercises for early primary grades. With the help of an AI engine, it can function as an “auto-tutor” even without continuous internet. In many Kenyan schools, especially in slum areas or remote counties, student populations are large and resources are thin; EIDU provides digital lesson plans to teachers and practice exercises directly to children. Supported by grants (including from Google’s AI for Social Good funding), EIDU plans to reach up to 2 million pre-primary and primary students in Kenya with personalized, curriculum-aligned learning content – all accessible offline for communities with limited connectivity. The AI adapts to each student’s level: for example, if a child consistently struggles with a type of math problem, the app will give additional practice problems of that type and may offer hints or simpler sub-steps to guide the child. If a student is excelling, it might introduce more challenging tasks to keep them engaged. Early pilots have demonstrated improvements in foundational numeracy for participants, and teachers have appreciated the detailed analytics the system provides (e.g., which questions most students found difficult, allowing the teacher to address that topic in class). By empowering teachers with data and providing students with self-paced practice, AI is effectively enhancing the learning process in these settings.
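The mastery-based adaptation described above – easier items for struggling students, harder ones for those excelling – can be illustrated with a minimal sketch. All function names, skills, and thresholds here are hypothetical, not EIDU’s actual implementation:

```python
# Illustrative sketch of adaptive exercise selection, as described for
# tutors like EIDU. Names and thresholds are invented for illustration.

def next_exercise(history, exercises):
    """Pick the next exercise from the student's recent accuracy per skill.
    `history` maps skill -> list of recent 0/1 outcomes;
    `exercises` maps skill -> {"easy": [...], "standard": [...], "hard": [...]}."""
    def accuracy(outcomes):
        return sum(outcomes) / len(outcomes) if outcomes else 0.5

    # Target the skill with the lowest recent accuracy (most in need of practice).
    weakest = min(history, key=lambda s: accuracy(history[s]))
    acc = accuracy(history[weakest])
    if acc < 0.4:        # struggling: simpler sub-steps / hints
        level = "easy"
    elif acc > 0.85:     # excelling: keep the student challenged
        level = "hard"
    else:
        level = "standard"
    return weakest, exercises[weakest][level][0]

history = {"counting": [1, 1, 1, 1, 1, 1], "subtraction": [0, 1, 0, 0]}
exercises = {
    "counting": {"easy": ["count to 5"], "standard": ["count to 20"], "hard": ["skip-count by 3s"]},
    "subtraction": {"easy": ["5 - 1 with pictures"], "standard": ["12 - 7"], "hard": ["word problems"]},
}
print(next_exercise(history, exercises))  # -> ('subtraction', '5 - 1 with pictures')
```

The design choice worth noting is that the selection logic runs entirely on the device from locally stored outcomes, which is what makes offline operation feasible after initial setup.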
Language and Accessibility Innovations: Another facet of AI in education is breaking down language barriers and assisting students with disabilities. AI-based translation and speech recognition have made significant strides, and these are being applied in educational contexts to promote inclusion. In 2022, for example, Google Translate added support for 24 new languages – many of them African and South Asian languages spoken by tens of millions of people but historically underrepresented in digital resources. This was made possible by advances in AI language models. Such expansions mean that students and teachers can now access educational content (Wikipedia articles, online courses, etc.) in their native languages, or translate curricula between languages. Pichai noted that Google was working toward supporting 1,000 languages, which could dramatically increase access to knowledge in local languages worldwide. Already, teachers in rural Indonesia (for instance) have used AI translation to obtain science materials in Indonesian that were originally only available in English, making it easier for them to teach complex topics to their students. This kind of application, while not tied to a single nonprofit project, exemplifies how AI research breakthroughs (multilingual models) are yielding public goods that educators can freely leverage.
For students with disabilities, AI offers new tools for inclusion. Microsoft’s AI for Accessibility program (launched 2018) has funded projects like Seeing AI (for the visually impaired) and Helpicto (which converts voice to pictograms for people with autism). In education specifically, one noteworthy project from 2023 is at the University of Surrey in the UK, where researchers – supported by philanthropic grants – are using AI to generate photorealistic sign language videos on demand. The goal is to help the Deaf community access educational and public information in their first language (sign language) more easily and affordably. Currently, translating textbooks or lesson content into sign language requires human signers and video recording, which is costly and not scalable (an estimated 80% of profoundly deaf people worldwide do not have sufficient access to sign language interpreters). The Surrey project’s AI system can take written text and produce a video avatar “signing” the content in British Sign Language or American Sign Language. With further development, an educator could input a lesson or a homework problem and immediately get a sign language version for deaf students, which would be revolutionary for inclusive education. The project’s initial focus is on the UK and US, aiming to benefit about 600,000 Deaf individuals for whom sign is the first language, with potential to expand to other sign languages globally. This exemplifies AI’s power to make education accessible for learners with special needs – an area often underserved by mainstream ed-tech.
Government and System-Level Use: Some governments are beginning to incorporate AI to improve educational planning and quality assurance in public school systems. For example, during the COVID-19 school closures, some education ministries used AI analytics on student engagement data to identify which regions or demographics were at risk of falling behind, allowing targeted interventions (like deploying radio instruction or printed packets to areas where online uptake was low). In Uruguay, the national “Plan Ceibal” education initiative used AI to match students with remote tutors during the pandemic, ensuring that struggling students got personalized help. In China (outside the scope of nonprofits but notable), several provincial public school systems implemented AI homework grading and feedback systems to reduce teachers’ workload and provide students with instant feedback – though the Chinese experience also raised debates about data privacy and over-reliance on algorithms in education. These examples underscore that while AI holds promise, careful policy and ethical consideration is needed; for instance, data collected on students must be handled with privacy safeguards, and the role of teachers remains irreplaceable as orchestrators of learning.
The early outcomes from these education initiatives are heartening. They indicate that AI can help deliver education in more equitable and effective ways. However, it is worth noting the challenges observed. One challenge is the digital divide – many of these solutions presume a minimum level of device availability and electrical power. Efforts like EIDU’s focus on low-cost devices and offline functionality are directly addressing this, but scaling to every low-resource school will require investment in basic infrastructure as well. Another challenge is teacher training and buy-in: teachers may fear AI will replace them or may not trust the tool’s recommendations. The Nigeria project mitigated this by involving teachers in the process and framing the AI as a support tool; as a result, teachers reported positive experiences, seeing AI as a way to engage students more (De Simone et al., 2025). Ensuring that educators are partners in AI integration is critical for long-term success.
In conclusion, AI applications in education, driven by public interest objectives, are expanding opportunities for learners who might otherwise be left behind. From sub-Saharan African high schoolers gaining an AI study buddy, to South Asian preschoolers receiving AI-tailored early lessons, to Deaf students accessing materials via AI-generated sign language – the common theme is using technology to promote inclusion and quality in learning. These projects remain in relatively early stages compared to the global scale of educational need, but they provide proof-of-concept that with smart implementation, AI can be a leveler in education. As one education expert remarked at the AI for Good Summit, “An AI tutor that costs nothing to replicate can, in theory, give every child in the world a personal learning experience once infrastructure catches up” (AI for Good, 2023 panel). The next sections will examine how similar principles are being applied in domains like climate action and disaster response, where AI’s pattern recognition and predictive powers are being harnessed for the public good.
AI for Climate Change and Environmental Sustainability
Climate change presents an existential challenge to humanity, and it is an area where AI has been increasingly deployed for public-benefit initiatives. In the past five years, a range of projects has shown how AI can improve climate modeling, enhance environmental monitoring, and support climate adaptation efforts – often in ways that directly aid vulnerable communities. These initiatives, backed by research institutions, nonprofits, and forward-looking government agencies, emphasize using AI to protect the planet and build resilience, rather than for commercial gain. Here we explore examples of how AI is contributing to climate action: from more accurate weather forecasts that save lives, to management of natural resources and biodiversity, to reduction of emissions and better disaster preparedness.
Advanced Climate and Weather Forecasting: One of the most significant applications of AI in the climate domain is improving the accuracy and lead time of weather forecasts, especially for extreme events like floods, storms, and droughts. Traditional physics-based forecasting models require enormous supercomputing resources and still have limits in resolution and timeliness. In a groundbreaking collaboration launched in East Africa in 2024, the United Nations World Food Programme (WFP) partnered with the University of Oxford and the IGAD Climate Prediction and Applications Center (ICPAC) (the regional climate center for East Africa) to integrate an AI-based weather model into regional early warning systems. This initiative, supported by funding and cloud computing resources from Google.org, aims to revolutionize forecasting for a region that faces both severe droughts and devastating floods due to climate change.
Oxford’s team developed a deep learning model for rainfall prediction that can generate high-resolution forecasts up to 7 days in advance without needing the traditional computational infrastructure that many African meteorological agencies lack. The AI model was trained on historical weather data and physics-model outputs, learning to predict precipitation patterns with fine spatial detail. “We believe the approach we have pioneered is a game-changer for parts of the world which have previously suffered from a lack of resources and infrastructure but nonetheless find themselves bearing the brunt of climate change,” said Dr. Shruti Nath, a climate scientist at Oxford, who helped develop the model. In trials, the AI system demonstrated significant improvements in forecast accuracy for local rainfall events compared to existing methods. National meteorological agencies in Kenya and Ethiopia began testing the AI forecasts alongside their operational ones and reported that the AI often better pinpointed heavy rain occurrences that the conventional models missed. Hannah Wangari, Assistant Director at the Kenya Meteorological Department, remarked that the results “are promising and demonstrate significant improvements in accuracy” when comparing the AI model to current tools.
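The core idea in the paragraph above – learning a statistical mapping from physics-model output to observed local rainfall, so that high-quality forecasts can be produced without a supercomputer – can be illustrated with a toy example. The real Oxford system is a deep learning model; this sketch substitutes a simple least-squares fit on synthetic data purely to show the emulation concept:

```python
# Toy illustration of ML-based forecast correction/emulation: learn the
# relation between coarse physics-model rainfall and observed local
# rainfall, then apply it to new forecasts. Data and the linear model
# are synthetic stand-ins for a trained deep network.
import random

random.seed(0)

# Synthetic "training data": coarse-model rainfall vs. observed local rainfall.
coarse = [random.uniform(0, 20) for _ in range(200)]
observed = [1.5 * c + random.gauss(0, 1.0) for c in coarse]  # hidden true relation

# Fit y = a*x + b by ordinary least squares (closed form).
n = len(coarse)
mx = sum(coarse) / n
my = sum(observed) / n
a = sum((x - mx) * (y - my) for x, y in zip(coarse, observed)) / sum(
    (x - mx) ** 2 for x in coarse
)
b = my - a * mx

def downscale(coarse_forecast):
    """Apply the learned correction to a new coarse forecast."""
    return a * coarse_forecast + b

print(round(a, 1))  # recovers approximately 1.5, the hidden relation
```

Once trained, applying such a model to a new forecast is a trivial computation – which is precisely why this approach suits meteorological agencies that lack the infrastructure to run full physics simulations themselves.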
The impact of this is enormous for anticipatory humanitarian action. With more precise and earlier warnings of extreme weather, WFP and governments can initiate early interventions: for example, releasing funds to communities ahead of an expected flood or drought (a strategy known as forecast-based financing), pre-positioning relief supplies, or helping farmers move livestock to safer ground. Jesse Mason, WFP’s global lead for Anticipatory Action, emphasized this proactive shift: “The World Food Programme has realized that we need to start protecting lives before they need saving… making sure that governments and communities have the tools to prepare and mitigate impacts ahead of extreme events”. By July 2024, this AI-augmented system was credited with enabling earlier flood advisories in South Sudan and anticipatory cash disbursements to at-risk families in Kenya when forecasts showed a high probability of a poor rainy season. Mason expressed his optimism, saying “I think we have the potential to change the world” with such collaborations. It’s a potent example of public-private-academic partnership: WFP brings ground knowledge and the mandate to protect communities, Oxford provides cutting-edge AI research, and Google provides funding and cloud computing (covering computational needs that local agencies would struggle with). The success in East Africa is already serving as a model for other regions – plans are underway to replicate this approach in parts of Asia and Latin America that face similar forecast challenges. This case shows that AI, when applied to climate science, can directly translate to more lives saved and livelihoods protected, essentially by buying time and accuracy in early warning.
In a related vein, flood forecasting has been dramatically improved by AI on a global scale. Google’s Flood Forecasting Initiative, which started as a pilot in India in 2018, has by 2023–2024 expanded to cover over 80 countries across South Asia, Africa, and Latin America. Google’s AI models use a combination of meteorological data and hydrological simulations to predict riverine floods and issue warnings up to 7 days in advance. Historically, many developing countries did not have detailed flood forecasting, or it was available only a day or two before a flood. By leveraging AI, Google managed to increase both the lead time and geographic coverage of flood alerts. As of early 2024, this system (delivered through a platform called FloodHub) provided free real-time flood forecasts to an area covering 460 million people worldwide. In India and Bangladesh alone – where the program first scaled – the system sent out 115 million flood alert notifications in 2021, which was triple the number from the previous year, thanks to model improvements. Sella Nevo, the head of Google’s flood initiative, noted that “forecasting can prevent 30–50 percent of [flood] damage” with timely evacuations and preparations. He reported in late 2021 that Google had delivered over 100 million alerts via mobile notifications to people in harm’s way. These alerts, integrated into widely used platforms like Google Maps and Search for accessibility, have been credited with saving lives – for example, local officials in Bihar, India, stated that they could evacuate villages more efficiently with the longer lead time, and families in Bangladesh reported moving valuables and livestock to higher ground in response to alerts, reducing losses.
By 2022, Google’s flood forecasting had extended beyond South Asia to 18 additional countries, including several in Africa (e.g., Nigeria, South Africa), Latin America (e.g., Brazil), and Southeast Asia. Google also made FloodHub a public website where anyone can see flood predictions and risk maps for their region. This democratization of data means that even local NGOs or small government offices without sophisticated forecasting tech can access reliable forecasts. It exemplifies how a tech company’s AI research can be steered towards a public service that has no direct profit motive, but huge social value. The project involved partnerships with local water and meteorology agencies (like India’s Central Water Commission and Bangladesh’s Water Development Board) to get river data and validate models. In short, AI is enabling a quantum leap in early warning capabilities for climate-related disasters, and the beneficiaries are among the world’s most vulnerable populations who live in floodplains or drought-prone areas with historically limited warning systems.
Environmental Monitoring and Conservation: AI’s pattern-recognition abilities have been a boon for monitoring the natural environment and biodiversity, which is essential in an era of climate change and habitat loss. A vivid example is Wildlife Insights, a platform launched through a partnership of conservation organizations (Conservation International, WWF, WCS, and others) and Google. Wildlife Insights uses AI to analyze camera trap photos from forests and wildlife reserves around the world. Camera traps – motion-triggered cameras – are a popular tool for conservationists to observe wildlife, but they generate millions of images, often with a large fraction showing nothing (empty forest or just moving leaves). Sorting and identifying animals in these photos used to take researchers months or even years. The AI models in Wildlife Insights can now classify images up to 3,000 times faster than humans, processing about 3.6 million photos per hour. The AI is trained to recognize over 700 species; it first filters out images with no animals (which can be ~80% of captures, triggered by wind or irrelevant movement), and then tags the species present in the rest (e.g., “jaguar,” “elephant,” “empty”). Jorge Ahumada, a conservation scientist involved in the project, explained that this drastically reduces manual labor: “It takes us from spending months going through photos to having results in near-real-time,” allowing timely insights into wildlife populations. Wildlife Insights, since its public launch in 2019, has created the largest publicly accessible collection of wildlife images in the world, with over 100 million images from cameras in Asia, Africa, Latin America, and beyond.
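The two-stage pipeline described above – first filtering out blank frames, then running species identification only on the remainder – can be sketched as follows. The model functions here are stand-ins; the real Wildlife Insights system uses trained deep networks:

```python
# Illustrative two-stage camera-trap pipeline, mirroring the workflow
# described for Wildlife Insights: discard blank frames first, then
# classify the species in the rest. Model functions are hypothetical.

def detect_animal(image):
    # Stand-in for a blank-vs-animal detector; a real model returns a
    # confidence that any animal is present in the frame.
    return image.get("animal_score", 0.0)

def classify_species(image):
    # Stand-in for a species classifier over ~700 classes.
    return image.get("species_guess", "unknown")

def process_batch(images, blank_threshold=0.5):
    """Return {image_id: label}, tagging blanks without running the
    more expensive species classifier on them."""
    labels = {}
    for img in images:
        if detect_animal(img) < blank_threshold:
            labels[img["id"]] = "blank"      # e.g., wind-triggered capture
        else:
            labels[img["id"]] = classify_species(img)
    return labels

batch = [
    {"id": 1, "animal_score": 0.1},
    {"id": 2, "animal_score": 0.95, "species_guess": "jaguar"},
]
print(process_batch(batch))  # -> {1: 'blank', 2: 'jaguar'}
```

Because roughly 80% of captures are blank, discarding them with a cheap first-stage check before species classification is what makes throughput on the order of millions of photos per hour achievable.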
The impact on conservation efforts is tangible. Researchers and park rangers can now use the platform to quickly gauge species abundance and distribution, detect the presence of endangered animals or invading species, and make data-driven decisions about protecting habitats. For example, after the devastating Australian bushfires of 2019–2020, Wildlife Insights helped WWF-Australia process over a half-million camera trap images from burned forests to see which animals were returning and where to focus restoration efforts. By early 2021, more than 600 cameras were deployed in the bushfire zones, and AI was identifying species like koalas, kangaroos, and brushtail possums in the photographs, providing hope and guidance on ecosystem recovery. Darren Grover of WWF-Australia noted that having this data gave conservationists a clearer picture of how wildlife was coping and which areas were in need of urgent support, something that wouldn’t have been feasible without AI due to time constraints.
Additionally, the open data approach of Wildlife Insights – where data from various projects is aggregated – helps in the larger fight against climate change. Biodiversity data is critical for climate models and environmental policy (for instance, tropical forests’ health can be assessed partly by wildlife indicators). The AI-enhanced platform “de-silos” such data, enabling meta-analyses and global maps of species distributions that inform climate resilience strategies. Policy-makers can use outputs (like hotspot maps of biodiversity or areas of rapid decline) to designate protected areas or enact conservation measures. By expanding knowledge, AI is indirectly supporting climate action – since preserving ecosystems like forests, wetlands, and grasslands is a key component of climate mitigation (they sequester carbon) and adaptation (they reduce disaster risk).
Speaking of wetlands, another innovative project uses AI to map and monitor these critical but often neglected ecosystems. Wetlands are huge carbon sinks and biodiversity havens, but many remain poorly mapped. In 2023, the International Water Management Institute and DHI (a not-for-profit research organization) collaborated with UNEP to use open-source satellite imagery and machine learning to create the first high-resolution global map of wetlands. By training AI to recognize wetland characteristics in satellite data, they identified previously uncharted wetlands in five pilot countries and produced a scalable method to update wetland maps regularly. This helps countries include wetlands in their climate plans (as part of nature-based solutions) and track changes due to climate impacts or human encroachment. For example, in Uganda the project discovered small wetlands that local communities rely on for water filtration; now those are being targeted for conservation.
Agriculture and Food Security: AI for climate is also playing a direct role in helping farmers adapt to changing conditions and reduce climate-related losses. A prominent example is the work of Wadhwani AI, a nonprofit research institute in India that focuses on AI solutions for social sectors. Wadhwani AI developed an app for cotton farmers that uses AI to detect pest infestations (such as the pink bollworm) early by analyzing images of pest traps taken with a smartphone. Cotton is a critical crop for millions of smallholder farmers in India, but pest outbreaks exacerbated by changing climate patterns can devastate yields. With the AI app, farmers receive warnings and specific guidance on pest management if the model finds evidence of an infestation. During field trials, this AI-driven approach led to a 20% increase in farmers’ profits and a 25% reduction in pesticide usage (since interventions could be more targeted and timely). This outcome is significant: it shows AI can both improve economic resilience (more income) and have environmental co-benefits (less chemical use). On the back of these results, the Indian Ministry of Agriculture partnered with Wadhwani AI to scale the solution across multiple states and to extend it to 10 staple crops, including rice and wheat, aiming to reach millions of farmers nationwide. The broader vision is that by protecting crop yields from climate-exacerbated pests, hunger and livelihood threats are mitigated, contributing to SDG2: Zero Hunger. If such programs succeed, they could mitigate the threat of food insecurity for potentially billions of people dependent on those critical crops. While “billions” refers to end consumers globally who rely on those food crops, the direct beneficiaries are the farming communities who can better weather climate shocks with AI-guided agricultural practices.
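The decision step behind such a pest-advisory app – an image model counts pests in a trap photo, and the count is compared against an action threshold before warning the farmer – can be sketched as below. The threshold, counts, and messages are illustrative assumptions, not Wadhwani AI’s actual parameters:

```python
# Hedged sketch of the advisory step in a pest-management app like the
# one described for cotton farmers: per-trap pest counts (which a vision
# model would produce from trap photos) are averaged and compared to an
# action threshold. All values are illustrative.

ACTION_THRESHOLD = 8  # pests per trap per night; illustrative value

def advisory(pest_counts_per_trap):
    """Average the model's per-trap counts and return (action, message)."""
    avg = sum(pest_counts_per_trap) / len(pest_counts_per_trap)
    if avg >= ACTION_THRESHOLD:
        return ("spray",
                f"Average {avg:.1f} pests/trap exceeds threshold; "
                "targeted spraying advised.")
    return ("monitor",
            f"Average {avg:.1f} pests/trap below threshold; "
            "re-check traps in a few days.")

action, message = advisory([3, 12, 10])  # avg 8.3 -> exceeds threshold
print(action)  # -> spray
```

Gating pesticide use behind a threshold like this is what yields the dual benefit reported in the trials: spraying happens only when and where it is warranted, protecting income while reducing chemical use.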
Another agricultural example is the use of AI for climate-resilient breeding of crops. The International Rice Research Institute (IRRI), supported by AI experts, has been leveraging machine learning to analyze its vast seed bank of rice varieties. By correlating genetic data with traits like drought tolerance or flood tolerance, AI models help identify which gene combinations lead to resilience. This speeds up the breeding of rice strains that can withstand the more frequent droughts or floods expected under climate change. In 2023, as part of Google’s AI for Global Goals initiative, IRRI received support to use AI in developing new rice varieties; this ultimately ties into securing food supplies for the future.
Emissions Reduction and Energy: While many of the examples so far focus on adaptation and resilience, AI is also being used in efforts to mitigate climate change by reducing greenhouse gas emissions. For instance, some city governments have employed AI to optimize traffic signals and reduce congestion (hence cutting vehicle idling emissions) – Barcelona’s public transit authority did pilot studies with this. Utilities in the United States and Europe have used AI to forecast energy demand and manage grids more efficiently, allowing a higher penetration of renewable energy (since AI can predict and balance the variability of solar and wind power). Although these latter examples are often within for-profit utilities or city operations, they serve public goals of sustainability. An interesting nonprofit-driven case is Open Climate Fix, a small UK-based nonprofit that built an AI model to forecast short-term solar power output by analyzing satellite images of clouds; the data helps grid operators efficiently use solar energy and reduce reliance on fossil backup, thereby lowering emissions.
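As a simple illustration of the idea behind solar nowcasting, the sketch below scales clear-sky capacity by forecast cloud cover. Open Climate Fix's production models are deep networks trained on sequences of satellite images; the linear attenuation rule and the parameter value here are invented for exposition.

```python
# Naive nowcasting baseline for illustration (Open Climate Fix's real
# models are deep networks over satellite imagery; the attenuation
# factor here is an invented parameter).

def nowcast_pv_output(clear_sky_mw, cloud_fraction, attenuation=0.75):
    """Estimate solar generation given a forecast cloud fraction (0..1).

    attenuation: hypothetical share of output lost under full cloud.
    """
    cloud_fraction = min(max(cloud_fraction, 0.0), 1.0)  # clamp to [0, 1]
    return clear_sky_mw * (1.0 - attenuation * cloud_fraction)

# A grid operator could use such estimates to schedule less fossil backup:
print(nowcast_pv_output(100.0, 0.0))            # clear skies: 100.0
print(round(nowcast_pv_output(100.0, 0.4), 1))  # 40% cloud: 70.0
```

Even a crude forecast like this reduces the fossil "spinning reserve" a grid must hold; the value of the learned models lies in predicting cloud motion minutes to hours ahead rather than reacting to it.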
Furthermore, AI is aiding climate researchers in crunching data for policy. Climate modeling groups (like those contributing to the IPCC) have started using AI to approximate complex climate model outputs with less computation – so they can run many more simulations (for uncertainty analysis or fine regional detail) than traditionally possible. This contributes to better-informed climate policies globally. For example, in 2021 a team from NVIDIA and the U.S. National Center for Atmospheric Research developed a deep-learning “emulator” that could generate climate extreme statistics at high resolution much faster than a full simulation, allowing developing countries to access localized climate risk information without needing a supercomputer.
Protecting Communities and Ecosystems: A unifying theme across these examples is that AI is helping to protect those most vulnerable to climate and environmental risks. Whether it is fisherfolk in Bangladesh getting timely flood warnings, pastoralists in Kenya receiving an anticipatory cash transfer before a drought (guided by AI weather forecasts), or indigenous forest guardians in the Amazon using AI-enhanced monitoring to detect illegal deforestation (a use case not detailed above, though such projects analyze satellite images for signs of logging and mining), the technology is being directed at safeguarding lives, livelihoods, and critical ecosystems. These efforts often involve empowering local communities with information: for example, UNICEF's Giga initiative used AI to map schools via satellite so that governments know where to extend electricity and internet; indirectly, this also supports climate initiatives, as connected schools can receive climate education and alerts.
The successes are accompanied by new challenges and considerations. One challenge is data availability and bias: AI models are only as good as the data they are trained on, and for many African or Pacific Island nations, historical climate data is sparse. Partnerships like the one with Oxford and WFP help because they merge local data with global datasets, but there is a need to continually improve data collection in the Global South. Another consideration is ethical use of AI in climate adaptation – for example, if AI predicts a certain village is at extreme climate risk, how to ensure that does not stigmatize the village or lead to maladaptive responses (this is where human judgment and policy must guide the use of AI outputs). Additionally, computational resources and expertise are needed to run these models; initiatives like Climate Change AI (a volunteer academic group) and AI for Climate by ITU are focusing on capacity-building so that developing country researchers and practitioners can also harness AI directly, not just rely on external solutions.
Despite these challenges, the trend is clearly toward AI becoming an integral tool in the climate action toolkit. The Executive Secretary of ICPAC, Dr. Mahmoud Moshine, noted in 2024 that “by fostering local ownership and trust, this collaboration [AI forecasting in East Africa] is building resilient systems that can effectively anticipate and respond to extreme weather events” – highlighting that trust and local integration are as important as the technology itself. The examples of flood forecasting and wildlife monitoring also emphasize open access and transparency: Google releasing flood maps to all, Wildlife Insights sharing data globally. This is somewhat unique in the climate space, as some data has traditionally been proprietary. It represents a shift toward viewing climate-related AI tools as global public goods.
In conclusion, AI initiatives in climate change and the environment over the past five years demonstrate a significant positive impact: better predictions, smarter resource management, and faster responses are helping societies cope with and combat climate change. From the standpoint of societal transformation, these AI applications are enabling humans to understand complex environmental changes and respond in ways that were not previously possible (or at least not at the necessary speed and scale). They exhibit how AI, when guided by non-commercial objectives, can strengthen the resilience of communities on the frontlines of climate impacts and enhance conservation of our planet’s critical ecosystems. As climate change continues to accelerate, such AI-driven hope spots provide valuable lessons and tools. The next section will delve into humanitarian aid and disaster response more broadly, where some overlap with climate-related disasters exists, but also other crises where AI is making a humanitarian difference.
AI in Humanitarian Aid and Disaster Response
Humanitarian aid and disaster response have long been arenas that rely on rapid information gathering and efficient resource allocation – tasks that align well with AI’s strengths in data analysis and prediction. In the past five years, humanitarian organizations have begun to leverage AI to improve emergency response, optimize logistics, and better serve people in crisis. These efforts span natural disaster response (earthquakes, storms, floods), conflict and refugee crises, and efforts to improve general humanitarian operations like food distribution. Unlike military or defense uses of AI, the applications discussed here are firmly rooted in humanitarian principles: saving lives, reducing suffering, and upholding dignity during emergencies. Many are spearheaded by UN agencies, Red Cross societies, or humanitarian NGOs in collaboration with tech volunteers and researchers. We highlight several key ways AI is transforming humanitarian work for the better.
Rapid Damage Assessment and Crisis Mapping: When a disaster strikes – such as an earthquake or a hurricane – one of the first challenges is to assess the scope of damage: Which areas are hardest hit? How many buildings are destroyed? Where are the people in need of urgent assistance? Traditionally, this process might involve days of sending assessment teams on the ground and manual interpretation of satellite images. AI is dramatically accelerating this damage assessment phase. The United Nations World Food Programme (WFP) has developed a system called DEEP (Digital Engine for Emergency Photo-analysis), which uses machine learning to analyze aerial and satellite images of disaster zones. DEEP can scan tens of thousands of images in a matter of hours, automatically identifying damaged structures, flooded areas, blocked roads, and other critical features. WFP reported that when Hurricane Fiona hit the Caribbean in 2022, they were able to process drone imagery with DEEP in just a few hours – a task that would have taken up to three weeks with traditional manual methods. The AI produced heat maps highlighting the worst-hit areas, which directed relief teams faster than ever before to communities in need. By pinpointing where infrastructure was destroyed and where survivors were concentrated, WFP and local responders could prioritize delivering food, water, and medical care to those locations first. This undoubtedly saved lives and reduced suffering in the critical first 48 hours after the hurricane, when every hour counts.
Building on DEEP, WFP and partners (including Google Research) also created SKAI, an open-source AI tool for post-disaster assessment. SKAI was designed to be user-friendly for humanitarian analysts and was deployed in events like the 2022 Pakistan floods and the February 2023 Türkiye-Syria earthquakes. According to WFP, SKAI provided “critical insights 13 times faster and 77 percent cheaper than manual methods” in those operations. Considering the scale: after the Türkiye earthquake, millions of buildings across a vast area needed to be inspected – AI was able to rapidly classify collapsed vs. intact buildings from satellite images, offering a big-picture damage overview to the UN and local authorities. This allowed more efficient distribution of rescue teams and relief supplies. Moreover, WFP’s decision to open-source SKAI means that any government or NGO can deploy similar AI capabilities without starting from scratch or paying for expensive proprietary software. It exemplifies the humanitarian ethic of sharing life-saving technology as a public good.
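The aggregation step of such a damage-assessment pipeline can be illustrated schematically. The sketch below assumes a classifier has already labelled each building as damaged or intact (the hard, learned part of tools like SKAI) and shows only how per-building labels roll up into a heat map for responders; all names and coordinates are hypothetical.

```python
# Toy final stage of a SKAI-style assessment (the real systems pair
# pre- and post-disaster satellite images with learned classifiers;
# here we assume per-building damage labels already exist and show
# only how they aggregate into a heat map that directs relief teams).
from collections import defaultdict

def damage_heatmap(buildings):
    """buildings: iterable of (lat, lon, is_damaged) labels from an AI
    classifier. Buckets buildings into ~0.01-degree grid cells and
    returns {cell: damaged_fraction} for triage."""
    totals = defaultdict(lambda: [0, 0])  # cell -> [damaged_count, total]
    for lat, lon, damaged in buildings:
        cell = (int(lat * 100), int(lon * 100))
        totals[cell][1] += 1
        totals[cell][0] += int(damaged)
    return {cell: d / t for cell, (d, t) in totals.items()}

# Hypothetical classifier output for four buildings in two areas:
labels = [(37.05, 36.20, True), (37.05, 36.201, True),
          (37.05, 36.202, False), (37.30, 36.50, False)]
heat = damage_heatmap(labels)
worst = max(heat, key=heat.get)
print(f"worst cell {worst}: {heat[worst]:.0%} of buildings damaged")
```

Ranking cells by damaged fraction is what turns millions of individual classifications into the actionable "where first?" answer responders need in the first 48 hours.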
Humanitarian Logistics and Resource Allocation: Another sphere where AI is delivering benefits is in managing the complex logistics of humanitarian aid – ensuring that aid gets to the right people at the right time, without waste or fraud. WFP (the world’s largest humanitarian organization fighting hunger) again provides a case in point. They have piloted an “AI-enabled supply chain optimization” system called SCOUT, which analyzes myriad data sources (crop forecasts, market prices, weather, historical delivery times) to assist in strategic decision-making about food procurement and delivery. In West Africa, SCOUT was used in 2024 to optimize how WFP sourced and pre-positioned sorghum (a staple grain) for upcoming needs. By suggesting longer-term contracts when prices were favorable and by planning shipments more efficiently, SCOUT helped save WFP about US$2 million in that year on sorghum operations. Those savings can be redirected to feed more people. This shows how AI can identify efficiencies in humanitarian operations that are otherwise hard to spot, especially in volatile contexts. It’s essentially stretching donor dollars further – a critical advantage in a world where humanitarian needs often outpace funding.
On the beneficiary management side, WFP developed an Enterprise Deduplication Solution that uses advanced algorithms to clean and cross-check beneficiary databases. In large relief operations (refugee camps, national safety nets), multiple registrations or data errors can lead to some people accidentally receiving duplicate aid and others missing out. WFP’s AI-driven deduplication system was piloted in three country offices and achieved 99.99% accuracy in detecting anomalies in beneficiary lists. This ensured assistance was delivered more equitably and saved close to $400,000 by eliminating duplicate or improper entries. Importantly, WFP noted that independent assessments found this system complied with data privacy standards – an illustration of balancing efficiency with protection principles. Such backend improvements, though not visible to the public, have a real impact: they increase the transparency and fairness of aid distribution, which in turn builds trust in humanitarian programs among both donors and recipients.
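The general technique behind such deduplication can be sketched with standard fuzzy string matching. WFP's Enterprise Deduplication Solution is not public, so the normalisation rules, threshold, and fields below are illustrative assumptions rather than its actual design; flagged pairs would go to a human reviewer rather than being removed automatically.

```python
# Illustrative fuzzy deduplication (WFP's actual system is not public;
# this sketch shows the general technique of normalising records and
# flagging near-identical pairs for human review, never auto-deleting).
from difflib import SequenceMatcher

def normalise(record):
    """Collapse case, punctuation, and whitespace variation in names."""
    return " ".join(record["name"].lower().replace("-", " ").split())

def find_likely_duplicates(records, threshold=0.9):
    """Return index pairs with matching birth years and near-identical
    normalised names; each pair is a candidate for human confirmation."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if records[i]["birth_year"] != records[j]["birth_year"]:
                continue  # cheap blocking rule before fuzzy comparison
            score = SequenceMatcher(None, normalise(records[i]),
                                    normalise(records[j])).ratio()
            if score >= threshold:
                pairs.append((i, j))
    return pairs

roster = [
    {"name": "Amina Hassan",  "birth_year": 1988},
    {"name": "AMINA  HASSAN", "birth_year": 1988},  # duplicate entry
    {"name": "Amina Hussein", "birth_year": 1990},  # different person
]
print(find_likely_duplicates(roster))  # prints: [(0, 1)]
```

The blocking rule (comparing only records with matching birth years) is what keeps pairwise comparison tractable on registries with millions of entries.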
AI is also facilitating two-way communication in disasters. For example, the American Red Cross and IFRC have experimented with AI-powered chatbots on messaging apps to interact with disaster-affected communities in local languages. These chatbots can answer common questions (e.g., where is the nearest shelter? how to purify water?) and collect information on needs. WFP’s Innovation unit mentioned using chatbots for two-way communication with crisis-affected people, which can help identify urgent needs or misinformation and allow communities to voice feedback securely. In a 2019 cyclone in Mozambique, a locally deployed chatbot handled thousands of inquiries from citizens seeking help, demonstrating that such AI tools can scale up communication when human operators are overwhelmed.
Famine and Food Security Early Warning: Humanitarian aid is not only reactive; increasingly it’s trying to be proactive by predicting crises before they fully unfold. AI contributes here by analyzing complex factors that lead to crises like famine. The Hunger Map Live is an AI-powered platform by WFP that monitors food security in real-time across the globe. It combines data on conflict, weather, market prices, vegetation indices, and more, using machine learning to produce early warnings of food insecurity spikes. For example, it might flag that due to consecutive drought forecasts and economic shocks, a particular region in the Sahel could see a rapid rise in hunger in the next 3 months. This gives agencies a chance to act early – mobilizing funds, pre-positioning food stocks, or supporting livelihoods to mitigate the worst outcomes. Early action is far more cost-effective and humane than emergency response after people are already in crisis. By 2024, Hunger Map Live and similar AI-driven analytics were supporting decisions in dozens of countries, essentially functioning as a “weather forecast for hunger” that policymakers consult regularly. A concrete instance: in Somalia, AI models incorporating climate forecasts and displacement data projected a high risk of famine in late 2022, which prompted the release of anticipatory humanitarian financing that helped avert some of the worst-case scenarios (FAO/WFP report, 2023).
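The scoring idea behind such early-warning platforms can be shown schematically. Hunger Map Live itself uses trained machine-learning models over many data streams; the indicator names, weights, and cut-offs below are purely illustrative.

```python
# Schematic early-warning score (Hunger Map Live uses trained ML models
# over many data streams; these indicator names, weights, and cut-offs
# are purely illustrative).

WEIGHTS = {
    "rainfall_deficit": 0.35,  # rainfall shortfall vs normal, scaled 0..1
    "food_price_spike": 0.30,  # staple-price anomaly, scaled 0..1
    "conflict_events":  0.20,  # recent conflict intensity, scaled 0..1
    "displacement":     0.15,  # new displacement, scaled 0..1
}

def hunger_risk(indicators):
    """Combine standardised indicators into a risk tier and score."""
    score = sum(WEIGHTS[k] * indicators.get(k, 0.0) for k in WEIGHTS)
    if score >= 0.6:
        return "high", score      # e.g. trigger anticipatory financing review
    if score >= 0.3:
        return "elevated", score  # e.g. increase monitoring frequency
    return "low", score

level, score = hunger_risk({"rainfall_deficit": 0.9, "food_price_spike": 0.8,
                            "conflict_events": 0.5, "displacement": 0.3})
print(level, round(score, 3))  # prints: high 0.7
```

In a real system the weights are learned from historical outcomes and the tiers are calibrated against past crises; the value lies in the tiers being tied to pre-agreed anticipatory actions.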
Another compelling story of AI in humanitarian targeting is from Togo during the COVID-19 crisis. When lockdowns and economic disruptions threatened to push many Togolese into extreme poverty, the government, supported by researchers and the charity GiveDirectly, launched “Novissi” – an emergency cash transfer program. To identify the neediest quickly, they turned to machine learning. Researchers used anonymized mobile phone data (e.g., airtime top-ups, cell tower pings) and satellite imagery to predict wealth levels across the population. The AI model helped find poor households that weren’t in any social registry – people who would have been missed by traditional targeting. A 2021 study found that this AI-based targeting outperformed other methods (like community-based targeting or simpler proxy measures) in reaching the poorest citizens. As a result, Togo was able to deliver mobile cash payments to over 100,000 vulnerable people swiftly during the pandemic’s first wave, with relatively low inclusion error (people who weren’t poor getting aid) and exclusion error (poor people left out). The success of this approach garnered attention as a model for using AI and big data in a transparent and accountable way to improve humanitarian aid targeting. It showcased that even small countries with limited resources can leverage modern AI (with international support) to empower their responses to crises, ensuring help goes to those who need it most. Notably, this was done with attention to privacy and ethics: no personal identities were revealed to the model, and the government engaged in community validation of the lists generated by AI.
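The targeting approach can be sketched as a score-and-rank procedure. The published Novissi research trained models on call-detail records and satellite features; the feature names and coefficients below are invented for illustration, and households are identified only by anonymous IDs, echoing the privacy safeguards described above.

```python
# Sketch of score-and-rank aid targeting in the spirit of Togo's Novissi
# programme. The real study trained models on call-detail records and
# satellite imagery; the features and coefficients below are invented.

FEATURE_WEIGHTS = {
    "monthly_topup_usd": -0.5,  # more airtime spending -> likely less poor
    "nighttime_lights":  -0.3,  # brighter home area -> likely less poor
    "intl_call_share":   -0.2,  # more international calls -> less poor
}

def poverty_score(household):
    """Higher (less negative) score = fewer consumption signals,
    i.e. predicted poorer. Features assumed pre-scaled to 0..1."""
    return sum(w * household[f] for f, w in FEATURE_WEIGHTS.items())

def select_beneficiaries(households, coverage=0.5):
    """Rank anonymised households and select the predicted-poorest share."""
    ranked = sorted(households, key=poverty_score, reverse=True)
    cutoff = max(1, int(len(ranked) * coverage))
    return [h["id"] for h in ranked[:cutoff]]

households = [
    {"id": "A", "monthly_topup_usd": 0.90, "nighttime_lights": 0.80, "intl_call_share": 0.7},
    {"id": "B", "monthly_topup_usd": 0.10, "nighttime_lights": 0.20, "intl_call_share": 0.0},
    {"id": "C", "monthly_topup_usd": 0.40, "nighttime_lights": 0.30, "intl_call_share": 0.1},
    {"id": "D", "monthly_topup_usd": 0.05, "nighttime_lights": 0.10, "intl_call_share": 0.0},
]
print(select_beneficiaries(households))  # prints: ['D', 'B']
```

The coverage cut-off is a policy choice, not a model output, which is one reason the Togolese programme paired the algorithmic ranking with community validation of the resulting lists.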
Refugee Support and Protection: The UN Refugee Agency (UNHCR) and other organizations have started using AI tools to assist refugees and asylum seekers. For example, UNHCR deployed a chatbot named “Hello Azizi” in multiple languages to answer common questions for refugees in Kenya and Jordan, freeing up staff and providing 24/7 info on topics like registration procedures or how to access services. Natural language processing (NLP) allows these chatbots to handle multiple languages and dialects common among refugee populations (such as Arabic and Somali). Additionally, projects are exploring AI to help match refugees to optimal locations or services – akin to a massive matching puzzle of skills, needs, and opportunities, where algorithms could help identify, say, which city might offer the best support network or job prospects for a particular refugee family (though this is still experimental and must be balanced with refugees’ own agency and preference).
In protection work, AI image recognition has been used to reunite separated families – e.g., comparing photos of lost children in refugee camps with missing children databases. Voice recognition AI is also being trialed to detect distress or coercion in phone calls, which could aid efforts to combat human trafficking or abuse of refugees. These are nascent uses, but they show the breadth of humanitarian contexts where AI might contribute.
Cultural Heritage in Crises: A tangential but interesting application – preserving cultural heritage under threat (which can be considered part of humanitarian or peacebuilding work). During conflicts or natural disasters, important cultural sites and artifacts can be destroyed. AI has been used to create digital replicas or “digital twins” of monuments (e.g., Palmyra’s ruins in Syria or historic sites in Iraq) using photographs and 3D modeling, so that even if the physical structures are lost, their memory is preserved and can guide future reconstruction. Microsoft’s AI for Cultural Heritage (launched 2019) has supported some such projects, including an effort to digitally document endangered languages and historical artifacts. While not lifesaving in the immediate term, protecting heritage contributes to community resilience and identity in the aftermath of crises, which is an increasingly recognized aspect of recovery.
Challenges and Ethical Considerations: The deployment of AI in humanitarian contexts, while beneficial, does raise important ethical questions. Humanitarians stress that AI should augment, not replace, human decision-making in crises. As WFP put it, “AI should not replace the human element”. Decisions about who gets aid, where to prioritize, etc., involve values and context that algorithms may not fully capture. Thus, many projects pair AI systems with human oversight – e.g., AI flags patterns and humans validate them. Bias is another concern: if the data feeding an AI model is incomplete or biased (say, social media data that overrepresents certain groups), the recommendations could inadvertently skew aid. Agencies like the Red Cross Red Crescent Climate Centre have published guidelines on ethically using AI, emphasizing inclusivity (engaging affected communities in design), bias mitigation, and transparency. Data privacy is paramount as well. Humanitarian organizations handle sensitive personal data of vulnerable populations; any AI that uses this data must adhere to strict protection standards. Encouragingly, initiatives like WFP’s privacy-vetted deduplication algorithm show that it’s possible to use AI while upholding data protection.
Connectivity is another practical challenge – many AI tools assume internet or power availability, which can be knocked out in disasters. To address this, some systems are made to run offline on local devices (e.g., UAVs with on-board AI for search-and-rescue image analysis that don’t need to send data to the cloud). UNICEF and ITU’s Giga project to connect schools to the internet will also, indirectly, expand the connectivity footprint that humanitarians can leverage in future crises.
Despite these challenges, the trend is clearly positive: AI is making humanitarian aid more anticipatory, targeted, and efficient, essentially helping humanitarians do more with limited resources in the face of growing global crises. An executive summary from a 2023 humanitarian innovation report perhaps said it best: “From weeks to hours, from gut feeling to evidence, from one-size-fits-all to individualized aid – AI is changing how we help people, provided we use it responsibly” (ICRC Humanitech, 2023). The case studies we’ve seen, from WFP’s AI saving time and money to Togo’s ML-driven cash aid reaching the poorest, all reinforce the notion that when guided by humanitarian values, AI can be a powerful force-multiplier for good.
AI for Cultural Heritage and Inclusion
While healthcare, education, climate, and humanitarian response address urgent and fundamental needs, the realm of culture and inclusion is also a critical part of societal well-being. Nonprofit and public-interest AI initiatives have extended into preserving cultural heritage, promoting linguistic diversity, and empowering people with disabilities – areas that might not directly save lives, but enrich lives and ensure that technological progress benefits everyone. In the past five years, several notable projects have used AI to safeguard endangered aspects of human civilization (like languages and historical artifacts) and to make the digital world more accessible to those with disabilities. These efforts underscore a holistic view of “social good,” where progress is not only measured in economic or health terms but also in terms of cultural continuity and social inclusion.
Preserving Endangered Languages and Heritage: Language is a key carrier of culture, and many of the world’s languages are at risk of extinction within this century. In 2019, UNESCO highlighted that roughly one third of languages have fewer than 1,000 speakers and one language dies every two weeks. In response, technologists and archivists have turned to AI to help preserve and revitalize languages. Microsoft’s AI for Cultural Heritage program, announced in 2019 as a new pillar of its AI for Good initiative, explicitly focuses on projects to preserve languages, places, and historical artifacts using AI. One example from this program is the work with communities in Mexico to digitally archive and translate Indigenous languages like Yucatec Maya and Querétaro Otomi using AI. By training language models on whatever recorded data exists and then partnering with linguists, Microsoft helped create tools that can translate these Indigenous languages to and from Spanish or English. This not only preserves the languages in a digital form but also makes them more accessible to younger generations and outsiders, for instance via a chatbot that can teach basic phrases or a text translation system that community members can use on smartphones. Such technology can breathe new life into languages that were previously oral-only and at risk of fading away as elders pass on. As Brad Smith of Microsoft noted, preserving cultural heritage is “imperative to the well-being of the world’s societies” – it fosters resilience and social cohesion by connecting people with their history and identity.
Another fascinating project under AI for Cultural Heritage was a collaboration with the Metropolitan Museum of Art in New York and MIT to use AI in making the Met’s vast collection more accessible and engaging. They explored using AI to analyze the Met’s Open Access collection (over 375,000 images of artworks) and create new ways for the public to discover art across cultures and eras. For example, an AI could recommend art pieces to users based on visual similarities or themes, regardless of the pieces’ geographic or temporal origin – revealing connections that a visitor might not find on their own. This is more about cultural education and appreciation: using AI as a “curator” to surface stories from the museum’s collection that highlight contributions of different cultures or the links between them.
In Europe, when the Notre-Dame Cathedral in Paris tragically caught fire in April 2019, there was talk of using AI and digital scans to aid its reconstruction. Prior to the fire, art historians had made detailed 3D laser scans of Notre-Dame. AI can help interpret and refine these scans to create blueprints for rebuilding, ensuring accuracy in preserving the cathedral’s original gothic design. This synergy of AI and cultural heritage came to prominence as people realized that digital backups of cultural heritage can be invaluable – and AI can enhance those backups or fill in missing pieces. Similarly, after Palmyra’s ruins were vandalized during conflict, AI was used on crowdsourced photos to reconstruct 3D models of destroyed statues.
Assistive AI for People with Disabilities: Inclusivity in the digital age requires that AI also work for those who have disabilities – visual, hearing, motor, cognitive, etc. There has been a rise in AI-powered assistive technologies, often supported by nonprofits or corporate philanthropic arms, aiming to empower these individuals. Microsoft’s AI for Accessibility initiative (a $25 million program) has funded dozens of projects. A flagship outcome is the Seeing AI app, a free “talking camera” application for the blind and low-vision community. Seeing AI uses computer vision to recognize objects, read text, describe scenes, and identify faces, narrating the world to the user in spoken words. Launched in 2017 and continually improved, Seeing AI by 2025 had been downloaded by hundreds of thousands of users and expanded to read in multiple languages (including identifying currency notes, which is very helpful for blind individuals in daily transactions) (Microsoft data, 2022). One blind user of Seeing AI wrote on Reddit, “It’s like having a pair of eyes in my pocket. I can read restaurant menus on my own now. AI can’t help with everything, but it’s given me more independence.” Such testimonials highlight the transformative personal impact of these tools.
For the Deaf and hard-of-hearing community, AI-driven captioning and translation have been a game changer. Services like Microsoft Translator and Google Live Transcribe use AI speech recognition to provide instant captions for spoken conversations or videos, benefiting Deaf users. During the pandemic, this was particularly valuable as so much schooling and work moved to videoconferencing; auto-captions made these virtual meetings accessible to those with hearing loss. The quality of AI captions has improved vastly (thanks to deep learning), and while not perfect, they often achieve 90%+ accuracy in good conditions, which is comparable to human captioners for many everyday scenarios. Additionally, projects like the aforementioned sign language avatar from the University of Surrey aim to cover the cases where captions are insufficient – some Deaf individuals prefer sign language as their primary mode of understanding. That project’s ability to generate photo-realistic sign language videos on demand could, in the near future, enable websites or e-learning courses to include a “Sign Language” button that triggers an AI interpreter to sign the content for Deaf users. As of 2024, prototypes could handle a limited set of sentences, but with more training data the system should improve. There is a strong push from the disability advocacy community to ensure these AI solutions are co-designed with users, to get nuances right (for instance, ensuring that an AI signing avatar is expressive enough, because facial expression is crucial in sign languages).
In education, assistive AI shows up in personalized learning tools for students with learning disabilities. For example, some apps use AI to help children with dyslexia practice reading by adjusting text presentations to their needs, or AI to help children on the autism spectrum recognize social cues by analyzing facial expressions in teaching videos. These are typically developed by research labs or ed-tech nonprofits (like the Fred Rogers Center working on an AI social skills coach).
Another inclusive angle of AI is bridging the digital divide for those in remote or underserved communities. We’ve discussed projects like EIDU (offline education for low-connectivity areas) and flood alerts delivered via text in rural areas. There are also efforts to create AI that works with low literacy users, often using voice interfaces. For instance, a startup in India developed an AI voice assistant that farmers with little formal education could call to ask questions and get advice in their dialect; it effectively acted as an agricultural helpline. The Educational Equality Institute in Europe has looked into using AI tutors for refugee children who might have missed schooling, emphasizing culturally responsive and language-accessible AI interactions (The Educational Equality Institute, 2023 report).
Intersection of Culture and Technology Ethics: AI itself has become a subject of community discussions and cultural perspectives. There’s a growing movement to include diverse cultural viewpoints in how AI systems are developed – essentially to avoid a kind of cultural monoculture in AI. For example, the global organization Partnership on AI has a working group on AI and media integrity that, among other things, consults with indigenous groups on preserving knowledge in the age of AI and preventing exploitation of cultural data. The Maori in New Zealand, for instance, have debated whether an AI should be allowed to learn and use the Maori language without permission from Maori elders – raising questions about data sovereignty and respect for cultural heritage in AI datasets. While not a direct application, these conversations shape public-interest policies around AI (like New Zealand’s government engaging Maori representatives in its AI strategy, ensuring inclusion).
Microsoft’s work with the Musée des Plans-Reliefs in Paris to create an AI-powered mixed reality exhibit of Mont-Saint-Michel is another example of cultural heritage meets cutting-edge tech. Visitors can point a device at a historical relief map and AI will overlay animations showing the history of that French landmark. Such projects, while perhaps seen as “cultural luxury,” have deep social value: they educate the public, celebrate cultural identity, and make museums more engaging and accessible (especially to younger, tech-savvy generations). Moreover, by preserving culture, societies maintain a sense of continuity and pride, which is important for social cohesion – a fact recognized in post-conflict reconciliation efforts as well.
Metrics of Success: In cultural and inclusion projects, success is often measured not in dollars or lives, but in reach and empowerment. For language preservation, a metric might be the number of hours of speech recorded and transcribed with AI, or the number of young people learning the language through an AI app. For accessibility tools, metrics include user adoption rates and reported increases in independence or productivity among people with disabilities. For example, if an AI captioning tool is deployed in 100 government offices, enabling X number of Deaf employees to participate fully in meetings, that’s a tangible inclusion outcome. Or if an AI cultural heritage project digitizes 10,000 ancient manuscripts, making them available to scholars worldwide, the impact is on knowledge preservation and dissemination.
Already, we have some metrics: Seeing AI by 2022 had described over 20 billion objects/texts to users (as per Microsoft usage stats), indicating huge uptake. The Google Lookout app (another AI assistant for the blind) is used in 25 languages. UNESCO’s compendium of AI in Education (2021) lists over 50 projects targeting inclusion of marginalized groups, showing a growing ecosystem.
Of course, challenges remain. One is ensuring these tools are affordable and available to the communities in question – which often means open-source or free distribution, as has been the case with Seeing AI, Wildlife Insights, etc. Another is preventing biases in AI from harming inclusion: e.g., facial recognition AI has historically struggled with darker skin tones; if not fixed, that could mean an AI sign-language avatar might not track the facial expressions of a Deaf person of color as well as for others. Thus, inclusive design and extensive testing across demographic variations are vital. Many AI-for-good projects explicitly incorporate fairness testing.
In summary, AI efforts in cultural heritage and inclusion demonstrate that social good through AI is not only about material needs, but also about preserving our collective human story and ensuring everyone can participate in society. These projects, while sometimes smaller in scale, enrich communities and uphold values that define humane societies. They show AI’s versatility: the same algorithms that might drive a commercial recommendation engine can be repurposed to recommend museum artifacts to a student, or to recommend more inclusive teaching strategies to a special educator. They also reinforce the central idea of this paper: that beyond profit, AI can be a tool for empowerment, equity, and preservation of what makes us human.
Discussion
The case studies and thematic analyses presented above collectively illustrate the transformative potential of AI when applied in service of society rather than in pursuit of profit. Across healthcare, education, climate action, humanitarian aid, and culture, we have seen AI systems achieving concrete benefits: earlier diagnoses and treatments for patients, personalized support for learners, anticipatory warnings that save lives in disasters, more efficient and equitable aid delivery, and preservation of languages and heritage. In this Discussion, we synthesize key insights from these examples, examine cross-cutting themes (including factors for success and challenges), and reflect on the broader implications for policy and practice. We also address potential criticisms and ethical considerations that arise from the use of AI in the public interest. The evidence gathered suggests that, while AI is not a panacea, when deployed thoughtfully and inclusively it can significantly accelerate progress on social goals. To capitalize on this momentum, stakeholders must invest in scaling proven solutions, ensure governance frameworks keep pace, and continue prioritizing ethics and human rights in AI deployments.
Key Factors Enabling Success: A recurring pattern in our analysis is that the most successful AI-for-good initiatives are underpinned by strong partnerships and major investments. Unlike purely commercial AI products which might be driven by market demand, these public-interest projects often required visionary funding and collaboration across sectors. For example, the East Africa climate forecasting breakthrough was possible only through a partnership linking a UN agency (WFP), an academic center (Oxford), a regional organization (ICPAC), and a tech corporation’s philanthropy (Google.org). Each brought complementary strengths: local knowledge and implementation capacity, cutting-edge AI research, climate expertise, and cloud infrastructure. Similarly, the flood forecasting expansion to 80 countries by Google involved close work with national water agencies and NGOs on the ground, and Wadhwani AI’s pest management success combined AI innovation with government agricultural extension networks to reach farmers. These cases underline that multi-stakeholder collaboration is a cornerstone – AI solutions for complex social problems need input and buy-in from those who understand the problem context, those who build the technology, and those who can implement at scale.
Another critical factor is the commitment of adequate resources over time. Many of the projects highlighted (e.g., Microsoft’s AI for Good programs, Google AI Impact Challenge grantees, etc.) were bolstered by multi-year funding on the order of millions to tens of millions of dollars. While that is modest compared to the scale of global issues, it was enough to develop prototypes, conduct pilots, and iterate on solutions. Notably, these investments often came in the form of grants or in-kind support with no expectation of financial return – essentially treating these AI solutions as public goods or mission-driven endeavors. This kind of impact investment in AI is a relatively new phenomenon, but it appears to pay off in high social return on investment. For instance, WFP’s Innovation Accelerator, which provided $323 million in grants since 2015 to tech-for-good projects, has seen innovations like DEEP and Hunger Map Live widely adopted and credited with improving outcomes in multiple emergencies. The positive impacts (lives saved, money saved, people helped) arguably far exceed the initial costs, highlighting a leverage effect of well-targeted innovation funding.
Measurable Impact and Evidence: One of the goals of this paper was to emphasize measurable outcomes. Summarizing some from our case studies: In health, AI-assisted TB screening led to up to 15% more cases detected in pilots, and portable AI X-ray units are bringing screening to hundreds of remote communities. In education, an RCT in Nigeria showed an overwhelmingly positive effect on learning outcomes from an AI tutor, with participants outperforming peers in core subjects. In disaster response, WFP’s AI analysis tools cut damage assessment time from three weeks to a few hours, directly influencing faster relief to affected areas. Google’s flood alerts have reached tens of millions of people and are associated with improved preparedness (though quantifying lives saved is complex, empirical studies in India have suggested fatalities from floods have been lower than expected in areas with the alert system active, a promising sign). Wadhwani AI’s farm pest solution boosted incomes by ~20% and reduced chemical use by 25%. Togo’s ML targeting of cash aid increased coverage of the ultra-poor relative to traditional methods by significant margins (the NBER study on Novissi estimated 4–5% more of the poorest quintile were reached, while reducing inclusion of wealthier groups). These metrics provide proof of concept that AI can deliver not just theoretical or marginal improvements, but substantial, life-changing benefits.
It is also evident that these successes are not confined to one region or wealthy countries – quite the opposite. Many of the most impactful projects occurred in low and middle-income countries (India, Nigeria, Kenya, Togo, etc.). This counters a common narrative that advanced tech like AI is primarily benefiting rich nations or companies. However, it’s worth noting that most of the AI algorithms and tools still originate from a few tech hubs. What has made the difference is those tools being channeled intentionally toward global good and adapted to local contexts. As such, expanding capacity in developing countries to create and modify AI solutions themselves is an important next step, lest they remain primarily recipients of outside innovation.
Ethical and Governance Challenges: Despite the overall positive findings, the case studies also surfaced several challenges and caveats. Data privacy and security is paramount, especially when dealing with vulnerable populations. For instance, the use of mobile phone data for Togo’s cash program or satellite data for refugee movement prediction raises questions: Are individuals aware their data is used? How to prevent misuse or unauthorized access? In Togo’s case, researchers took care to anonymize and aggregate data, and the government was transparent about the criteria for aid – an approach lauded in the humanitarian community as a model for responsible data use (Long, 2022). But such diligence needs to be universal. Humanitarian organizations have begun developing data protection protocols specifically for AI, building on existing ones like the ICRC’s Data Protection Handbook (2015) and incorporating principles like not using AI that could put people at risk (e.g., facial recognition in a conflict zone where it might be misused by belligerents). Encouragingly, our findings included examples where AI systems were explicitly checked for compliance with privacy standards.
Bias and fairness in AI outputs represent another major concern. Many of our cited projects explicitly tackled bias or were designed to reduce human bias (e.g., AI removing bias from beneficiary selection by focusing on data-driven indicators rather than potentially biased human judgment). However, AI itself can introduce bias if the training data are not representative. For example, if disaster damage assessment AI is trained mostly on urban building images, it might underperform in rural or different architectural contexts. Developers like those of SKAI are aware of this and have sought to train on diverse data, but the risk remains and continual validation is necessary. Fairness also means ensuring AI doesn’t inadvertently exclude those who are “off the data grid.” If, say, a hunger prediction algorithm relies on satellite crop data and phone mobility data, it might miss a pastoral community that doesn’t farm or use phones. Recognizing this, many humanitarian AI users treat model outputs as one input among several, and still rely on local knowledge to fill gaps.
A third ethical aspect is transparency and accountability. Some AI systems, especially complex deep learning ones, can be “black boxes.” In critical contexts like deciding aid or diagnosing disease, it’s important to retain human interpretability. We saw WFP mention the need for interpretable models in high-stakes environments to avoid blind trust in a “black box”. In practice, this has led to combining AI with heuristic or rule-based approaches and providing explainers for AI decisions (e.g., showing which features led the AI to flag a beneficiary record as duplicate). Going forward, practitioners advocate for “human-in-the-loop” designs – AI gives a recommendation, a human reviews and makes the final call, especially when consequences are significant.
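The human-in-the-loop pattern described above can be sketched in a few lines. This is an illustrative sketch only: the record labels, confidence threshold, and reviewer callback are all hypothetical, not drawn from any project cited here.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    label: str          # e.g. "duplicate" vs. "unique" for a beneficiary record
    confidence: float   # model's probability for its label
    reasons: list       # top features behind the decision (the "explainer")

def decide(rec: Recommendation,
           human_review: Callable[[Recommendation], str],
           threshold: float = 0.9) -> str:
    """Auto-accept only high-confidence recommendations; otherwise defer
    to a human reviewer, who sees the model's stated reasons."""
    if rec.confidence >= threshold:
        return rec.label
    return human_review(rec)

# Hypothetical usage: a callback stands in for the human operator.
rec = Recommendation("duplicate", 0.72, ["same phone number", "similar name"])
final = decide(rec, human_review=lambda r: "needs-field-check")
print(final)  # low confidence, so the human's decision wins
```

The design choice is the one the projects above converged on: the AI never has the final word below a confidence threshold, and the `reasons` field gives the reviewer something interpretable rather than a bare score.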
Scalability and Sustainability: Many projects we discussed are pilots or at early stages. A crucial question is how to scale them sustainably. Scalability has multiple dimensions: reaching more people/areas, integrating into existing systems, and continuing over time (financially and operationally). Some initiatives have already scaled impressively (Google’s flood AI went from one country to 80 in four years, Microsoft AI for Health went to 200+ partnerships in under three years). Others are in one locale and need expansion. Often, scaling requires transitioning from proof-of-concept to institutionalization. For example, the Nigeria AI tutoring might be adopted by the national education ministry if results continue to impress, integrating it into national curriculum support – which would be a big scale-up. But that means ensuring local infrastructure (computers, power) is in place and training educators to use it. Similarly, sustaining flood forecasting in 80 countries means continuously updating models and maintaining partnerships with local agencies – something Google’s team and local governments must plan for as a long-term collaborative service, not a one-off project.
Funding for scale is another concern. Grant-funded pilots can flounder after the grant ends. Encouragingly, there’s a trend of governments stepping up: e.g., India taking Wadhwani’s app nationwide, or Kenya’s government working with EIDU for broader deployment. International financial institutions are also interested in backing scale-up of proven solutions (the World Bank might fund expansion of an AI education project in multiple African countries if evidence is solid). New financing models, like social impact bonds or development impact bonds tied to outcomes (some have been trialed in education and health), could potentially incorporate AI-based interventions – investors pay for the implementation and get repaid by donors/governments if targets (like improved test scores or reduced disease incidence) are met, thereby mainstreaming these interventions.
Global Equity in AI: A broader implication of our findings is how they relate to the global discourse on AI equity. There has been concern that AI could exacerbate inequalities – between those who have AI capabilities and those who do not. The projects highlighted here flip that narrative: they represent AI being used intentionally to reduce inequalities (health gaps, education gaps, disaster vulnerability gaps). However, it’s crucial to ensure that these benefits reach the most marginalized. We noted issues like digital infrastructure being a limiting factor; tackling that is a prerequisite (hence initiatives like universal internet access being complementary to AI for good). Moreover, capacity-building so that local data scientists and engineers in developing countries can lead AI-for-good projects is essential for true equity. Some encouraging signs include workshops like “AI for Social Good” held in Africa, Asia, and Latin America (often sponsored by groups like UNESCO or Mila) training the next generation of practitioners. The Global Partnership on AI (GPAI) has “AI & Pandemic Response” and “AI & Climate” working groups that deliberately include experts from the Global South to share lessons and resources.
Public Trust and Involvement: For nonprofit and public sector AI to thrive, public trust is key. If communities perceive AI as a mysterious or threatening force imposed on them, they might resist its use or not fully cooperate (for instance, mistrusting AI advisories). The projects we described generally involved community engagement: e.g., midwives in Guatemala co-designed the toolkit for maternal health; refugee communities often help refine chatbot information; local volunteers ground-truth WFP’s Hunger Map predictions. When people see that AI tools are augmenting, not replacing, human care and that they can improve services, trust builds. A telling example is from the Red Cross’s social media analytics during disasters: initially, responders doubted that scanning Twitter could help, but after one AI model flagged a flooded village that had not called for help through official channels, thus prompting a timely rescue, the skepticism turned into active support (IFRC, 2021 case study). So success stories themselves drive acceptance. Nevertheless, transparency with the public about when AI is used (say, telling aid recipients that an algorithm helped determine assistance, and explaining the criteria in plain language) is recommended to avoid misunderstanding.
Emerging Trends: Looking ahead, a few trends could further shape AI for social good. The rise of generative AI (e.g., GPT-4 and beyond) opens new possibilities and challenges. On one hand, generative AI can create content (text, images, even synthetic data) which might help where data is scarce – for instance, simulating rare disease cases to train diagnostic algorithms, or generating localized educational content as Rocket Learning is doing. On the other, it raises risks like misinformation (deepfakes) which could harm social cohesion and trust. Our examples didn’t delve deeply into that, but as generative AI spreads, ensuring it is used responsibly (as a tool for good actors, and mitigated against misuse by bad actors) will be part of the governance challenge.
Another trend is local innovation ecosystems for AI for good. Countries like India, Kenya, and Tunisia have started “AI for social good” hackathons and incubators. This democratizes the problem-solving process and may yield solutions very tailored to local needs (like an Arabic NLP tool for refugees developed by a Jordanian startup). The more AI for good is embedded in grassroots innovation, the more likely the field will produce sustainable, context-appropriate solutions.
Conclusion of Discussion: The various dimensions explored in this discussion point to an overarching conclusion: AI has proven its value in addressing societal challenges, and with prudent management, its benefits can far outweigh its downsides. We are witnessing a paradigm shift where major institutions and communities are embracing AI not as an end in itself, but as an instrument to achieve humanitarian and development ends. This aligns with a global call (from the UN Secretary-General to grassroots activists) to ensure AI is aligned with human values and the public interest. Our analysis suggests that this alignment is not only possible, but already happening in many corners of the world. The task now is to scale, sustain, and govern these efforts to truly transform society.
In drawing lessons from across these cases, a few stand out.
Ultimately, the diversity of examples – from high-tech weather models saving lives to simple AI chatbots giving a deaf student equal access – paints a hopeful picture. These are not isolated miracles; they are reproducible approaches that many communities can benefit from if given support. As one stakeholder put it during an AI for Good panel, “We are at a juncture where for the first time, many solutions to age-old problems are within reach – thanks in part to AI. The question is, do we have the collective will to deploy them for those who need them most?” (AI for Good Summit, 2022). The evidence in this paper suggests that when that will is present – as shown by the initiatives of governments, companies, and NGOs described – AI becomes a tool of hope, not hype, tangibly improving human and societal outcomes.
Conclusion
In the mid-2020s, amid widespread narratives of AI’s disruptive power, the stories documented in this paper offer a compelling counter-narrative: Artificial intelligence, leveraged through nonprofit, academic, and public sector initiatives, is already driving positive transformation across the globe. From remote villages on the frontlines of climate change to overcrowded urban hospitals, AI technologies are being deployed to enhance human well-being, guided by values of equity, solidarity, and sustainability. Our extensive survey of the past five years reveals that global AI-for-good efforts have moved well beyond pilot ideas—they are delivering results at scale and informing a new paradigm of technological development “beyond profit.”
Several overarching conclusions emerge:
1. AI is a Force-Multiplier for Social Impact: In each sector examined, AI has amplified what human experts and frontline workers can do. In healthcare, it is accelerating diagnostics and reaching patients who previously lacked access to specialists, contributing to saving lives and reducing health disparities. In education, AI tutoring systems and content generators are helping personalize learning for millions of children, including those in underserved communities, thereby improving educational outcomes and inclusion. AI-driven climate and disaster prediction is buying precious time for communities to prepare and adapt, significantly cutting losses and casualties. Humanitarian operations enhanced by AI have become more efficient and equitable, ensuring aid goes further and reaches those in greatest need. While traditional approaches in these fields often struggled with scale or speed, AI has introduced solutions that operate at higher scale and in real-time, fundamentally improving effectiveness. As one report succinctly noted, AI can enable humanitarian and development efforts to shift “from reactive to proactive, from broad-brush to precision”. Our findings strongly support this: forecasting famines so they can be mitigated, tailoring education to each child so no one falls through the cracks, and so on.
2. Major Investments and Partnerships Have Catalyzed a Global Movement: The advances in applying AI for public good did not happen by accident; they are the result of intentional investments and coalition-building by diverse stakeholders. Tech companies have devoted substantial resources through philanthropic programs (e.g., Microsoft’s $125M AI for Good fund, Google.org’s AI Impact Challenge and Global Goals grants) to empower nonprofit initiatives. Governments of various countries, from India to Togo, have shown openness to innovative AI approaches in public service when evidence shows impact. International organizations and foundations have played convening and funding roles (as seen with UN agencies, the World Bank, the Rockefeller Foundation, and others supporting projects). Crucially, partnerships between AI experts and domain experts (doctors, teachers, agronomists, crisis managers) were a common ingredient in success. Cross-sector collaboration emerges as a best practice—marrying technical prowess with field experience and community trust. This global movement is further sustained by knowledge-sharing platforms (like the ITU’s AI for Good community, and academic networks) which ensure that lessons and tools are disseminated widely. As a result, AI-for-good projects in one locale often inspire and inform others elsewhere, creating a virtuous cycle of innovation. We have thus seen a “network effect,” where the more stakeholders commit to using AI for social good, the more normalized and expected it becomes as part of the toolkit for addressing societal issues.
3. Measurable Positive Impacts Underscore that AI for Good is More Than Hype: A critical contribution of this research has been to compile verifiable outcomes that ground the conversation in facts. The case studies provide quantitative and qualitative evidence – lives saved, percentages improved, costs reduced, behaviors changed. For example, flood forecasting AI delivering warnings to 460 million people, an education AI program boosting test scores significantly in a controlled trial, or an AI health initiative screening tens of thousands for diseases that would have gone undetected. These outcomes counter skepticism that AI’s societal benefit is merely theoretical or marginal. On the contrary, the benefits are concrete and often quite large in magnitude. Importantly, many of these projects have undergone external evaluation or been documented in reputable sources, adding credibility. This growing evidence base should encourage policymakers and the public to support and expand AI-for-good programs. It also provides a rebuttal to narratives that frame AI solely as a threat or commercial gimmick – demonstrating instead that AI, under the right stewardship, is already contributing to human development and can do even more.
4. Ethical Deployment and Inclusion are Imperative and Achievable: Throughout the initiatives examined, there is a clear recognition that responsible AI practices are not optional; they are fundamental to success. Many project leads imposed strict data governance (e.g., anonymization protocols, community consent, alignment with humanitarian data ethics) and bias mitigation strategies from the outset. We also saw that inclusive design – involving women, marginalized groups, and local communities in the AI solution development – led to more effective and trusted outcomes (for instance, girls benefiting strongly from the AI tutoring in Nigeria because the program was attentive to gender dynamics). The mantra “AI should not replace the human element” is echoed across humanitarian AI guidelines, and indeed none of the successful projects attempted to remove human experts or decision-makers. Instead, they freed those humans to focus on empathy, strategy, or higher-level tasks by handling the drudgery or complexity. This balanced approach appears to maintain human accountability and moral judgment in the loop, which is essential for public acceptance. The takeaway is that ethical AI is entirely compatible with impactful AI – in fact, it is a prerequisite for impact at scale because communities will reject or subvert technology they find harmful or unfair. The projects in this paper serve as models for how to integrate principles like fairness, transparency, and privacy into AI solutions without undermining functionality. They reinforce calls for continued development of governance frameworks (e.g., government regulations or industry standards on AI in sensitive sectors) that ensure all AI systems, not just nonprofit ones, align with societal values.
5. A Balanced Global Perspective with Local Empowerment: We made a conscious effort to maintain a global lens – highlighting examples from Asia, Africa, the Americas, and Europe, as well as from low-income, middle-income, and high-income contexts. The pattern that emerges is one of shared challenges and shared solutions. A flood in Bangladesh and a flood in Italy both can benefit from AI early warning; a rural teacher in India and one in California both can use AI to personalize lessons. However, the implementations must be locally adapted – language, culture, infrastructure vary. The initiatives that thrived did so by customizing AI models to local data and involving local champions (e.g., training community health workers in Uganda to use an AI microscope tool). Thus, while AI is often developed centrally, its deployment must be decentralized and context-aware. Encouragingly, some low-income countries have become innovators in their own right (like Togo’s early adoption of AI for social protection). This underscores a hopeful trend: AI for good is not limited to technologically advanced nations; with the right partnerships, it can leapfrog into any country to address pressing needs. As capacity grows, we foresee more home-grown AI solutions emerging from developing regions, tailored to their specific needs and possibly exported to the rest of the world (for example, India’s AI pest management or Kenya’s AI education ideas might benefit farmers or students elsewhere). Supporting these South-South knowledge exchanges and local innovation ecosystems will be critical to sustain momentum. In the end, global transformation will occur when every nation can harness AI in ways that align with their priorities and strengths. The work cited in this paper is laying the groundwork for exactly that democratization of AI capabilities.
Final Reflections: The title of this paper posits “There Is Hope,” and the body of evidence we have assembled validates that assertion. In a time of global uncertainty – marked by pandemics, climate crises, and inequality – the narrative of hope is often hard-won. Here we see that hope is instantiated in real projects and outcomes: a mother in a remote village receives life-saving TB treatment thanks to an AI scan; a student in a slum gains skills that open future opportunities with an AI tutor; a small farmer secures her harvest against pests with an AI advisory; aid reaches a family before hunger does because AI helped anticipate a drought. These are profound improvements in quality of life and human security.
Moreover, beyond individual stories, these successes signal a more systemic shift. They suggest that the trajectory of AI development can be steered by human intent and values. AI does not inherently drive profit or peril; its course is shaped by those who design and deploy it. Over the past five years, we have witnessed a conscientious effort by parts of the global community to steer AI towards altruistic ends. That effort is yielding fruit. It hints at a future where AI might play a key role in achieving ambitious global agendas such as the UN Sustainable Development Goals (which, as of 2023, are largely off-track). For example, AI could accelerate progress on SDG 3 (health) by extending medical services, on SDG 4 (education) by enabling personalized lifelong learning, on SDG 13 (climate action) by enhancing adaptation and resource efficiency, and so forth. Indeed, experts believe AI could help advance every single SDG if properly applied. The case studies in this paper serve as early exemplars of that potential.
Of course, realizing this future is not automatic. It will require continued advocacy, funding, innovation, and ethical oversight. It will also require tackling broader issues such as digital infrastructure gaps and AI literacy so that all countries can benefit. But the momentum is clearly building. Importantly, the narrative of AI for good is also fostering collaboration rather than competition – unlike some zero-sum technological races, AI for social good often involves open platforms, shared learnings, and collective problem-solving (the open-source nature of many tools like SKAI, or collaborative data platforms like Wildlife Insights, exemplify this). This ethos will be vital in ensuring that the “transforming society” aspect remains inclusive and participatory.
In closing, the evidence and analysis presented affirm that AI, guided by human empathy and ingenuity, is already transforming society in meaningful ways beyond the pursuit of profit. These initiatives provide a blueprint for expanding such efforts. Stakeholders at all levels – international bodies, national governments, tech companies, academia, civil society, and local communities – should take heart from these successes and redouble their commitment to AI for public good. As Jesse Mason of WFP succinctly put it, “We have the potential to change the world” with these tools. The onus is on us to ensure that this potential is realized widely and ethically. The early returns on investing in AI for good are exceedingly promising: more lives saved, more opportunities created, more heritage preserved, and more people included. Thus, even as we remain clear-eyed about AI’s challenges, we can confidently echo the sentiment in our title – “There is hope.” The global AI initiatives of the past five years demonstrate that hope is not merely an abstract ideal, but a tangible reality being built, algorithm by algorithm, community by community, toward a future where technology truly serves humanity as a whole.
References
Bankhwal, M., Bisht, A., Chui, M., Roberts, R., & van Heteren, A. (2024, May). AI for social good: Improving lives and protecting the planet. McKinsey & Company.
Beltrami, S., & Anthem, P. (2025, May 15). Does AI hold the key to ending hunger? World Food Programme.
De Simone, M. E., Tiberti, F., Mosuro, W., Manolio, F., Barron, M., & Dikoru, E. (2025, January 9). From chalkboards to chatbots: Transforming learning in Nigeria, one prompt at a time. World Bank Blogs.
GiveDirectly. (2021, July 28). Study: AI targeting helped reach more of the poorest people in Togo [Blog post].
Google. (2021, March). Wildlife Insights helps capture the beauty of biodiversity, as well as its fragility. Google Sustainability Stories.
Google. (2023). AI for the Global Goals: 2023 Google.org grant recipients. Google Global Goals.
Matias, Y. (2024, March 20). How we are using AI for reliable flood forecasting at a global scale. Google Keyword Blog.
IANS. (2022, November 5). Google expands India-first flood tracking tool to more countries. The Economic Times.
Smith, B. (2019, July 11). As technology like AI propels us into the future, it can also play an important role in preserving our past. Microsoft On the Issues (Blog).
Microsoft Research. (n.d.). AI for Health – Overview. Retrieved 2025, from Microsoft Research Projects.
Sharma, B. (2021, November 11). Google's AI-based flood forecasting system is saving lives in India: Here's how. IndiaTimes.
Unitaid. (2025, May 20). Unitaid and partners cut cost of portable AI-compatible X-ray device, bringing TB screening closer to communities [Press release].
University of Oxford. (2024, June 25). AI-led science innovation protects communities hit by climate change [Press release].
Varanasi, L. (2024, September 22). Google CEO Sundar Pichai says AI will advance humanity in 4 key ways. Business Insider.
World Food Programme. (2024). Innovation and technology. WFP.org (Our Work section).
World Food Programme. (2023). WFP Innovation Accelerator – 2023 Impact Summary. Munich: WFP.
WFP & FAO. (2023). Hunger Hotspots: FAO-WFP early warnings on acute food insecurity. Rome: United Nations.
About
Dr. Del Valle is an International Business Transformation Executive with broad experience in advisory practice building & client delivery, C-Level GTM activation campaigns, intelligent industry analytics services, and change & value levers assessments. He led the data integration for one of the largest touchless planning & fulfillment implementations in the world for a $346B health-care company. He holds a PhD in Law, a DBA, an MBA, and further postgraduate studies in Research, Data Science, Robotics, and Consumer Neuroscience. Follow him on LinkedIn: https://lnkd.in/gWCw-39g
✪ Author ✪
With 30+ published books spanning topics from IT Law to the application of AI in various contexts, I enjoy using my writing to bring clarity to complex fields. Explore my full collection of titles on my Amazon author page: https://www.amazon.com/author/ivandelvalle
✪ Academia ✪
As the 'Global AI Program Director & Head of Apsley Labs' at Apsley Business School London, Dr. Ivan Del Valle leads the WW development of cutting-edge applied AI curricula and certifications. At the helm of Apsley Labs, his aim is to shift the AI focus from tools to capabilities, ensuring tangible business value.