AI for the Underserved: Building Bridges, Not Just Models

Reflections from the HIMSS AI in Healthcare Forum

One of the more meaningful sessions I attended this week at the HIMSS AI in Healthcare Forum in Brooklyn focused not on the flashiest AI capabilities, but on the real-world transformation happening in America’s safety net hospitals and health centers.

The solution providers showcased at the conference have outstanding offerings, but many of them are costly. Large hospital systems and other well-resourced providers can certainly afford the most elegant and innovative solutions; this article is about a talk from the other end of the spectrum, where budgets are much tighter.

The session, Safety Net Innovators: Bringing Digital Transformation to Underserved Hospitals, brought together a deeply committed group of professionals from community health centers, public hospitals, and rural systems across the country. These organizations care for over 70 million people—often with limited resources, aging infrastructure, and an overwhelming number of patients navigating complex socioeconomic conditions.

What made this conversation powerful wasn’t just the tech (though that’s certainly part of it). It was the grit, humility, and clarity these leaders showed in how they are implementing AI responsibly—and equitably—against enormous odds.

The Health AI Partnership (HAIP), which convened this session, has created a framework for “AI in the real world.” It includes governance templates, decision-point frameworks, peer mentorships, and a technical assistance model to help under-resourced institutions move from aspiration to implementation. This isn't just talk—it’s practical, roll-up-your-sleeves guidance.

Here are some of the themes that resonated with me:

1. The Digital Divide is Real—and Nuanced

Several speakers emphasized that many of their patients lack internet access, speak languages other than English, or face significant barriers to navigating digital health tools. But the divide isn't just external; it's internal, too. Some safety net providers rely on legacy EHRs or shared multi-tenant Epic instances, and many lack the IT support to configure or even access predictive tools that are already available to them.

AI won’t solve these problems. But it can worsen them if we’re not intentional. That’s why shared governance, transparency, and community involvement matter so much.

2. Start Small, but Start Thoughtfully

Most of the organizations featured started with low-risk use cases: ambient scribing to reduce provider burnout, predictive analytics for missed appointments, or image-based screening for diabetic retinopathy. These are areas where AI can augment, not replace, clinical care—and free up humans to do what they do best: care, connect, and decide.

What stood out was the discipline in how they approached this: with training programs for staff, patient consent processes, vendor vetting committees, and governance boards that include clinicians and community representatives.

3. Failure is a Teacher

Nearly every panelist admitted to having “gotten it wrong” at first. One organization deployed a vendor’s AI solution before its teams were ready, and the deployment failed. But they didn’t stop there. Instead, they redesigned their approach, prioritized data governance, brought in external expertise, and re-engaged their board and clinicians.

That willingness to learn, adapt, and improve is something I wish more tech-forward institutions would emulate.

4. Vendors, Take Note

A poignant theme throughout the session was the mismatch between what’s offered and what’s needed. Safety net organizations don’t need shiny demos—they need responsible partners who will listen, adapt, and co-create. They need tools that work with their existing infrastructure and staff capacity—not platforms that assume a data science team is standing by.

If you’re building AI for healthcare and not engaging with the safety net, you’re missing both a moral imperative and a market opportunity.

5. A Call to the Public Sector

Funding came up repeatedly—and not just as a plea for dollars, but for structure. Just as the HITECH Act helped under-resourced organizations adopt EHRs more than a decade ago, speakers called for similar support for AI: regional extension centers, grant-funded implementation coaches, and clear regulatory frameworks that include the needs of small and rural providers—not just academic medical centers.

If we want AI to advance health equity, public policy needs to lead the way.




As the founder of the CxO Security Forum, I often find myself in rooms with CISOs, CIOs, and tech strategists focused on trust, safety, and governance. This session was a reminder that digital trust is not just about firewalls or data encryption—it’s about who gets left out.

It was also a reminder that innovation doesn’t have to mean moving fast and breaking things. Sometimes, it means moving carefully and building things that last—especially for those who have been left behind too often.

If you're working in health tech and not listening to these voices, I’d strongly encourage you to start. They are building something the rest of us would do well to learn from.

Michael Hiskey
Cybersecurity Community Leader & Advocate for Ethical AI
Founder, CxO Security Forum

Ciera Thomas

Research and Product Manager at DIHI


Michael, thank you for taking the time to write such a thoughtful reflection on our panel! It means a lot to see how these conversations about AI equity in healthcare are resonating. The experiences and capabilities of safety net HDOs are an essential consideration as we move forward with AI adoption in the broader healthcare landscape. Grateful to you for helping amplify these voices!
