Frameworks for AI Evaluation and Governance
Session Spotlight: September 26, 2025 | Harvard University & Beth Israel Deaconess Medical Center
Part of the “Signal Through the Noise: What Works, What Lasts, and What Matters in Healthcare AI” Conference, hosted by the Division of Clinical Informatics at BIDMC
As artificial intelligence becomes increasingly embedded across healthcare systems, the demand for trustworthy frameworks to evaluate its clinical and operational impact has never been greater. This session brings together national experts working on the front lines of AI evaluation, implementation, and governance.
Moderator: Yuri Quintana, PhD, FACMI, FIAHSI, Chief, Division of Clinical Informatics, BIDMC; Assistant Professor of Medicine, Harvard Medical School
Featured Speakers:
Steve Labkoff, MD, Vice President, Medical Analytics, Bristol Myers Squibb; Collaborating Scientist, BIDMC
Topic: Frameworks for Evaluating AI in Clinical Decision Support and Real-World Evidence Systems: DCI Network Frameworks
Leon Rozenblit, JD, PhD, Executive Director, Q.E.D. Institute; Lecturer, Yale School of Management; Collaborating Scientist, BIDMC
Topic: Consumer Health and AI Governance Models: DCI Network Frameworks
Laura Adams, Senior Advisor, National Academy of Medicine
Topic: National Academy of Medicine’s AI Code of Conduct: Building Trust Across Stakeholders
Jason Johnson, PhD, Chief Data & Analytics Officer, Dana-Farber Cancer Institute
Topic: AI Evaluation and Governance from the Cancer AI Alliance: Technological and Strategic Insights
Brian Anderson, MD, President & CEO, Coalition for Health AI
Topic: The Coalition’s Leadership Role in National Standards for AI Trustworthiness and Transparency
Why It Matters:
This panel will explore how institutional, national, and coalition-led frameworks are defining what it means for AI to be effective, equitable, and enduring in healthcare.
Topics will include:
• How to assess fairness, explainability, and reproducibility in AI systems
• Approaches to real-world validation in CDS and patient-facing applications
• Building governance structures that scale responsibly
• Defining value across patient, institutional, and industry perspectives
This session takes place on Day 2 of the Signal Through the Noise conference—a gathering focused on identifying AI innovations that deliver meaningful, lasting value in healthcare.
About the Conference:
Signal Through the Noise: What Works, What Lasts, and What Matters in Healthcare AI is a three-day hybrid conference hosted by the Division of Clinical Informatics at Beth Israel Deaconess Medical Center and Harvard Medical School.
The event brings together a diverse community of leaders across healthcare, life sciences, technology, policy, and patient advocacy to examine which AI systems are delivering real-world clinical, operational, and societal value—and which are falling short.
As the healthcare sector grapples with rapid AI adoption, this conference provides a critical forum for defining metrics of success, evaluating ethical and regulatory frameworks, and designing AI systems that are transparent, trustworthy, and patient-centered.
Attendees will include hospital and health system executives, digital health innovators, AI researchers, clinicians, patient advocates, data scientists, regulators, and policymakers—all united by the goal of advancing responsible, evidence-based AI integration in healthcare.
Learn more and register for the conference at https://www.dcinetwork.org/aiconf25
Call for Presentations: Share What Works in AI for Healthcare
We invite thought leaders, researchers, clinicians, innovators, and patient advocates to submit proposals for presentations at Signal Through the Noise: What Works, What Lasts, and What Matters in Healthcare AI, taking place September 25–27, 2025.
The call for presentations is open from July 14 through August 17, 2025, with acceptances announced by August 20, 2025.
We are especially seeking presentations that go beyond theoretical promises to showcase real-world applications of AI that have demonstrated meaningful outcomes—clinically, operationally, ethically, or societally.
Submissions should articulate a clear problem statement, describe the innovation being tested, and share results, lessons learned, and practical takeaways.
We encourage presenters to highlight patient engagement, open-source approaches, transparent architectures, and deployment in trusted health systems or consortia.
Suggested elements for your proposal include:
• How you identified the problem and defined success
• The role of patients or patient advocacy groups in your work
• Evidence of improved outcomes or patient experiences
• Details of your evaluation strategy and key data sources
• Any early efficacy signals, testimonials, or real-world deployments
• Use of analytics, methodologies, and links to publications or demos
• Practical tools or insights attendees can apply in their own settings
All presenters must disclose potential conflicts of interest (COIs), including financial, personal, or professional affiliations with companies or technologies being discussed. Full transparency is essential to ensure unbiased and ethical evaluation of AI technologies that impact patient care.
Submit your proposal and learn more: https://www.dcinetwork.org/aiconf25#call-for-presentations
Let your work help shape the future of responsible AI in healthcare.
#HealthcareAI #AIFrameworks #ClinicalInformatics #NAM #CoalitionForHealthAI #DCINetwork #DigitalHealth #HealthEquity #AITrustworthiness #HealthIT #BethIsraelDeaconess #HarvardMedicine #AIethics #CAIA #PatientCentricDesign #AIEvaluation