How Agentic AI is transforming healthcare delivery
In the Agents of Change podcast, host Anthony Witherspoon welcomes Archie Mayani, Chief Product Officer at GHX (Global Healthcare Exchange), to explore the vital role of artificial intelligence (AI) in healthcare.
GHX is a company that may not be visible to the average patient, but it plays a foundational role in ensuring healthcare systems operate efficiently. As Mayani describes it, GHX acts as "an invisible operating layer that helps hospitals get the right product at the right time, and most importantly, at the right cost."
GHX’s mission is bold and clear: to enable affordable, quality healthcare for all. While the work may seem unglamorous, focused on infrastructure beneath the surface, it is, in Mayani’s words, “mission critical” to the healthcare system.
Pioneering AI in the healthcare supply chain
AI has always been integral to GHX’s operations, even before the term became a buzzword. Mayani points out that the company was one of the early adopters of technologies like Optical Character Recognition (OCR) within healthcare supply chains, long before such tools were formally labeled as AI.
This historical context underlines GHX’s longstanding commitment to innovation.
Now, with the rise of generative AI and agentic systems, the company’s use of AI has evolved significantly. These advancements are being harnessed for:
Predicting medical supply shortages
Enhancing contract negotiations for health systems
Improving communication between clinicians and supply chain teams using natural language interfaces
All of these tools are deployed in service of one goal: to provide value-based outcomes and affordable care to patients, especially where it’s needed most.
Building resilience into healthcare with “Resiliency AI”
GHX builds resilience. That’s the ethos behind their proprietary system, aptly named Resiliency AI. The technology isn’t just about automation or cost-savings; it’s about fortifying healthcare infrastructure so it can adapt and thrive in the face of change.
Mayani articulates this vision succinctly: “We are not just building tech for healthcare… we are building resilience into healthcare.”
Anthony highlights a key point: AI's impact in healthcare reaches far beyond business efficiency. It touches lives during their most vulnerable moments.
The episode highlights a refreshing narrative about AI: one not focused on threats or ethical concerns, but rather on how AI can be an instrument of positive, human-centered change.
The imperative of responsible AI in healthcare
One of the core themes explored in this episode of Agents of Change is the pressing importance of responsible AI, a topic gaining traction across industries but particularly crucial in healthcare. Host Anthony sets the stage by highlighting how ethics and responsibility are non-negotiable in sectors where human lives are at stake.
Archie Mayani agrees wholeheartedly, emphasizing that in healthcare, the stakes for AI development are dramatically different compared to other industries. “If you're building a dating app, a hallucination is a funny story,” Mayani quips. “But in [healthcare], it's a lawsuit, or worse, a life lost.” His candid contrast underscores the life-critical nature of responsible AI design in the medical field.
Transparency and grounding: The foundation of ethical AI
For GHX, building responsible AI begins with transparency and grounding. Mayani stresses that these principles are not abstract ideals, but operational necessities.
“Responsible AI isn’t optional in healthcare,” he states. It's embedded in how GHX trains its AI models, especially those designed to predict the on-time delivery of surgical supplies, which are crucial for patient outcomes.
To ensure the highest level of reliability, GHX's AI models are trained on a diverse range of data:
Trading partner data from providers
Fulfillment records
Supplier reliability statistics
Logistical delay metrics
Historical data from natural disasters
This comprehensive data approach allows GHX to build systems that not only optimize supply chain logistics but also anticipate and mitigate real-world disruptions, delivering tangible value to hospitals and, ultimately, patients.
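As a purely illustrative sketch of how such varied signals might roll up into a single delivery-risk score, consider the toy model below. Every field name and weight is a hypothetical assumption for illustration; GHX's actual models are not public.

```python
# Hypothetical sketch: combining diverse supply chain signals into a
# delivery-risk score. All field names and weights are illustrative,
# not GHX's actual model.

def delivery_risk(order: dict) -> float:
    """Return a 0-1 risk score that a surgical-supply order arrives late."""
    # Each signal is assumed pre-normalized to 0-1 by the data pipeline.
    weights = {
        "supplier_late_rate": 0.35,   # supplier reliability statistics
        "route_delay_index": 0.25,    # logistical delay metrics
        "disaster_exposure": 0.25,    # historical natural-disaster data
        "fulfillment_backlog": 0.15,  # fulfillment records
    }
    return sum(weights[k] * order.get(k, 0.0) for k in weights)

order = {
    "supplier_late_rate": 0.2,
    "route_delay_index": 0.1,
    "disaster_exposure": 0.8,   # e.g. hurricane season in the Southeast
    "fulfillment_backlog": 0.3,
}
print(round(delivery_risk(order), 3))  # → 0.34
```

A production system would learn such weights from historical fulfillment outcomes rather than hand-setting them, but the principle is the same: no single data source is sufficient on its own.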
Explainability is key: AI must justify its decisions like a clinician
One of the most compelling points Archie Mayani makes in the discussion is that AI must explain its logic with the clarity and accountability of a trained clinician. This is especially important when dealing with life-critical healthcare decisions. At GHX, every disruption prediction produced by their AI system is accompanied by a confidence score, a criticality ranking, and a clear trace of the data sources behind the insight.
“If you can't explain it like a good clinician would, your AI model is not going to be as optimized or effective.”
This standard of explainability is what sets high-functioning healthcare AI apart. It’s not enough for a model to provide an output; it must articulate the “why” behind it in a way that builds trust and enables action from healthcare professionals.
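As a loose illustration of what such an explainable prediction might look like in code, the sketch below bundles the three elements Mayani names: a confidence score, a criticality ranking, and a trace of data sources. The schema and field names are entirely hypothetical, not GHX's actual output format.

```python
# Hypothetical schema for an explainable disruption prediction.
# Field names and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class DisruptionPrediction:
    item: str
    prediction: str
    confidence: float          # model's confidence score, 0-1
    criticality: str           # criticality ranking for patient impact
    evidence: list = field(default_factory=list)  # trace of data sources

    def explain(self) -> str:
        """Summarize the 'why' behind the prediction, clinician-style."""
        sources = "; ".join(self.evidence)
        return (f"{self.item}: {self.prediction} "
                f"(confidence {self.confidence:.0%}, criticality {self.criticality}). "
                f"Based on: {sources}")

p = DisruptionPrediction(
    item="surgical gloves",
    prediction="back-order likely within 2 weeks",
    confidence=0.87,
    criticality="high",
    evidence=["supplier fulfillment records", "regional storm forecast"],
)
print(p.explain())
```

The point of the structure is that a supply chain manager can act on the output without reverse-engineering the model: the evidence trace answers "why should I believe this?" before the question is asked.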
Avoiding AI hallucinations in healthcare
Mayani also reflects on historical missteps in healthcare AI to highlight the importance of data diversity and governance. One case he references is early AI models for mammogram interpretation. These systems produced unreliable predictions because the training data lacked diversity across race, ethnicity, and socioeconomic background.
This led to models that “hallucinated”, not in the sense of whimsical errors, but with serious real-world implications. For example, differences in breast tissue density between African American and Caucasian women weren’t properly accounted for, leading to flawed diagnostic predictions.
To counteract this, GHX emphasizes:
Inclusive training datasets across demographic and physiological variables
Rigorous data governance frameworks
A learning mindset that adapts models based on real-world feedback and outcomes
This commitment helps ensure AI tools in healthcare are equitable, reliable, and aligned with patient realities, not just technical possibilities.
The conversation also touches on a universal truth in AI development: the outputs of any model are only as good as the inputs provided. As Anthony notes, AI doesn't absolve humans of accountability. Instead, it reflects our biases and decisions.
“If an AI model has bias, often it's reflective of our own societal bias. You can't blame the model; it's showing something about us.”
This reinforces a central thesis of the episode: responsible AI begins with responsible humans, those who train, test, and deploy the models with intention, transparency, and care.
Want a legal and ethical perspective on AI?
While Archie Mayani offers deep insight into how AI is transforming healthcare responsibly, another must-hear conversation comes from the Explainable AI podcast.
In this episode, hosts Paul Anthony Claxton and Rohan Hall sit down with Omeed Tabiei (self-proclaimed “coolest lawyer ever”) to unpack the legal and ethical challenges shaping the future of AI.
From data privacy and compliance to AI liability and ownership, the conversation explores who’s accountable when AI gets it wrong, and how we can build safer, smarter systems.
Earning confidence in AI-driven healthcare
As AI becomes more embedded in healthcare, public fear and discomfort are natural reactions, particularly when it comes to technologies that influence life-altering decisions. Anthony captures this sentiment, noting that any major innovation, especially in sensitive sectors like healthcare, inevitably raises concerns.
Archie Mayani agrees, emphasizing that fear can serve a constructive purpose. "You're going to scale these agents and AI platforms to millions and billions of users," he notes. "You better be sure about what you're putting out there." That fear, he adds, should drive greater diligence, bias mitigation, and responsibility in deployment.
The key to overcoming this fear? Transparency, communication, and a demonstrable commitment to ethical design. As Mayani and Anthony suggest, trust must be earned, not assumed. Building that trust involves both technical rigor and emotional intelligence to show stakeholders that AI can be both safe and valuable.
The challenge of scaling agentic AI in healthcare
With a strong foundation in ethical responsibility, the conversation shifts to a pressing concern: scaling agentic AI models in healthcare environments. These AI systems are capable of autonomous decision-making within predefined constraints, which makes them highly useful but difficult to deploy consistently at scale.
Mayani draws an apt analogy: scaling agentic AI in healthcare is like introducing a new surgical technique.
"You have to prove it works, and then prove it works everywhere."
This speaks to a fundamental truth in health tech: context matters. An AI model trained on datasets from the Mayo Clinic, for example, cannot be transplanted wholesale into a rural community hospital in Arkansas. The operational environments, patient demographics, staff workflows, and infrastructure are vastly different.
Key barriers to AI scalability in healthcare
Contextual variability. Every healthcare setting is unique in terms of needs, infrastructure, and patient populations.
Data localization. Models must be fine-tuned to reflect local realities, not just generalized benchmarks.
Performance assurance. At scale, AI must remain accurate, explainable, and effective across all points of care.
For product leaders like Mayani, scale and monetization are the twin pressures of modern AI deployment. And in healthcare, the cost of getting it wrong is too high to ignore.
GHX’s resiliency center: A scalable AI solution in action
To illustrate how agentic AI can be successfully scaled in healthcare, Archie Mayani introduces one of GHX’s flagship products: Resiliency Center. This tool exemplifies how AI can predict and respond to supply chain disruptions at scale, offering evidence-based solutions in real time.
Resiliency Center is designed to:
Accurately categorize and predict potential disruptions in the healthcare supply chain
Recommend clinical product alternatives during those disruptions
Integrate seamlessly across dozens of ERP systems, even with catalog mismatches
Provide evidence-backed substitute products, such as alternatives to specific gloves or catheters likely to be back-ordered
These "near-neighborhood" product recommendations are not only clinically valid but also context-aware. This ensures that providers always have access to the right product, at the right time, at the right cost: a guiding principle for GHX.
“The definition of ‘right’ is really rooted in quality outcomes for the patient and providing access to affordable care, everywhere.”
This operational model is a clear example of scaling with purpose. It reflects Mayani's earlier point: you can’t scale effectively without training on the right datasets and incorporating robust feedback loops to detect and resolve model inaccuracies.
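To make the "near-neighborhood" idea concrete, here is a minimal sketch of attribute-based substitute ranking: score each candidate product by how closely its clinical attributes match the back-ordered item. The products, attributes, and similarity metric are all assumptions chosen for illustration; GHX's actual recommendation engine is not public.

```python
# Minimal sketch of "near-neighbor" substitute ranking: score candidate
# products by similarity of clinical attributes to the back-ordered item.
# Products, attributes, and the metric are hypothetical illustrations.
import math

def similarity(a: dict, b: dict) -> float:
    """Cosine similarity over the attributes both products share."""
    keys = a.keys() & b.keys()
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(a[k] ** 2 for k in keys))
    nb = math.sqrt(sum(b[k] ** 2 for k in keys))
    return dot / (na * nb) if na and nb else 0.0

# A back-ordered glove and two candidate substitutes (hypothetical data).
backordered = {"thickness_mm": 0.1, "latex_free": 1.0, "sterile": 1.0}
catalog = {
    "glove_A": {"thickness_mm": 0.12, "latex_free": 1.0, "sterile": 1.0},
    "glove_B": {"thickness_mm": 0.1, "latex_free": 0.0, "sterile": 1.0},
}
ranked = sorted(catalog, key=lambda p: similarity(backordered, catalog[p]),
                reverse=True)
print(ranked[0])  # → glove_A (latex-free, like the original)
```

In practice a clinically safe recommender would also apply hard constraints (a latex allergy is not a preference to be traded off), which is why Mayani stresses that the alternatives must be evidence-backed, not merely similar.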
Making sense of healthcare data
As the conversation shifts to the nature of healthcare data, Anthony raises a key issue: data fragmentation. In healthcare, data often exists in disconnected silos across hospitals, systems, devices, and patient records, making it notoriously difficult to use at scale.
Mayani affirms that overcoming this fragmentation is essential for responsible and effective AI. The foundation of scalable, bias-free, and high-performance AI models lies in two critical pillars:
Data diversity. AI systems must be trained on varied and inclusive datasets that reflect different patient populations, healthcare contexts, and operational environments.
Data governance. There must be strict protocols in place to manage, verify, and ethically handle healthcare data. This includes everything from ensuring data integrity to setting up feedback mechanisms that refine models over time.
“All of that, scaling, performance, bias mitigation, it ultimately comes down to the diversity and governance of the data.”
This framing offers a critical insight for healthcare leaders and AI practitioners alike: data is the bedrock of trustworthy AI systems in medicine.
Why local context and diverse data matter in healthcare AI
One of the most illustrative examples of data diversity’s value came when GHX’s models flagged a surgical glove shortage in small rural hospitals, a disruption that wasn't immediately visible in larger healthcare systems. Why?
Rural hospitals often have different reorder thresholds.
They typically lack buffer stock and have fewer supplier relationships compared to large Integrated Delivery Networks (IDNs).
This nuanced insight could only emerge from a truly diverse dataset. As Archie Mayani explains, if GHX had only trained its models using data from California, it might have entirely overlooked seasonal and regional challenges, like hurricanes in the Southeast or snowstorms in Minnesota, that affect supply chains differently.
“Healthcare isn’t a monolith. It’s a mosaic.”
That mosaic requires regionally relevant, context-sensitive data inputs to train agentic AI systems capable of functioning across a broad landscape of clinical settings.
Trust and data credibility: The often overlooked ingredient
Diversity in data is only part of the solution. Trust in data sources is equally critical. Archie points out a fundamental truth: not all datasets are equally valid. Some may be outdated, siloed, or disconnected from today’s realities. And when AI systems train on these flawed sources, their predictions suffer.
This is where GHX’s role as a trusted intermediary becomes essential. For over 25 years, GHX has served as a neutral and credible bridge between providers and suppliers, earning the trust required to curate, unify, and validate critical healthcare data.
“You need a trusted entity… not only for diverse datasets, but the most accurate, most reliable, most trusted datasets in the world.”
GHX facilitates cooperation across the entire healthcare data ecosystem, including:
Hospitals and providers
Medical suppliers and manufacturers
Electronic Medical Record (EMR) systems
Enterprise Resource Planning (ERP) platforms
This integrated ecosystem approach ensures the veracity of data and enables more accurate, bias-aware AI models.
Diversity and veracity: A dual mandate for scalable AI
Anthony aptly summarizes this insight as a two-pronged strategy: it's not enough to have diverse datasets; you also need high-veracity data that’s trusted, updated, and contextually relevant. Mayani agrees, adding that agentic AI cannot function in isolation; it depends on a unified and collaborative network of stakeholders.
“It’s beyond a network. It’s an ecosystem.”
By connecting with EMRs, ERPs, and every link in the healthcare chain, GHX ensures its AI models are both informed by real-world variability and grounded in validated data sources.
From classical AI to agentic AI: A new era in healthcare
Archie Mayani makes an important distinction between classical AI and agentic AI in healthcare. For decades, classical AI and machine learning have supported clinical decision-making, especially in diagnostics and risk stratification. These systems helped:
Identify patients with complex comorbidities
Prioritize care for those most at risk
Power early diagnostic tools such as mammography screenings
“We’ve always leveraged classical AI in healthcare… but agentic AI is different.”
Unlike classical models that deliver discrete outputs, agentic AI focuses on workflows. It has the potential to abstract, automate, and optimize full processes, making it uniquely suited to address the growing pressures in modern healthcare.
Solving systemic challenges with agentic AI
Mayani highlights the crisis of capacity in today’s healthcare systems, particularly in the U.S.:
Staff shortages across both clinical and back-office roles
Rising operational costs
Fewer trained physicians available on the floor
In this context, agentic AI emerges as a co-pilot. It supports overburdened staff by automating routine tasks, connecting data points, and offering intelligent recommendations that extend beyond the exam room.
One of the most compelling examples Mayani shares involves a patient with recurring asthma arriving at the emergency department. Traditionally, treatment would focus on the immediate clinical issue. But agentic AI can see the bigger picture:
Identifies that the patient lives near a pollution site
Notes missed follow-ups due to lack of transportation
Recognizes socioeconomic factors contributing to the recurring condition
With this information, the healthcare team can address the root cause, not just the symptom. This turns reactive treatment into proactive, preventative care, reducing waste and improving outcomes.
“Now you’re not treating a condition. You’re addressing a root cause.”
This approach is rooted in the Whole Person Care model, which Mayani recalls from his earlier career. While that model once relied on community health workers stitching together fragmented records, today’s agentic AI can do the same work faster, more reliably, and at scale.
Agentic AI as a member of the care team
Ultimately, Mayani envisions agentic AI as a full-fledged member of the care team, one capable of:
Intervening earlier in a patient’s health journey
Coordinating care across departments and disciplines
Understanding and integrating the social determinants of health
Delivering on the promise of Whole Person Care
This marks a paradigm shift, from episodic, condition-focused care to integrated, data-driven, human-centered healing.
One of the most transformative promises of agentic AI in healthcare is its ability to identify root causes faster, significantly reducing both costs and systemic waste. As Anthony notes, the delay in getting to a solution often drives up costs unnecessarily, and Mayani agrees.
“Prevention is better than cure… and right now, as we are fighting costs and waste, it hasn't been truer than any other time before.”
Agentic AI enables care teams to move from reactive service delivery to proactive problem-solving, aligning healthcare with long-promised but rarely achieved goals like holistic, whole-person care. As Mayani describes it, this is now a practical, scalable reality.
Want more?
You can read the full version of this article on the AI Accelerator Institute website and listen to the complete podcast episode here.