Steven Dodsworth’s Post

How to develop AI policies that work for your organization’s needs

What to include in an AI policy: The AMA toolkit includes a model AI policy document that health care organizations can download and modify to align with their existing governance structure, roles, responsibilities and processes. Health systems should also be aware of state and federal laws that could impact AI’s use. At minimum, an organization’s AI policy should articulate:

✅ Definitions for relevant terms such as generative AI and machine learning.
✅ AI risks, including those associated with a lack of transparency, patient safety, and data privacy and security.
✅ Permitted uses of publicly available AI tools, including permitted use cases such as drafting marketing materials and research summaries.
✅ Prohibited AI uses, such as entering patients’ personal health information into publicly available AI tools.
✅ Permitted uses of approved AI tools, including the requirements and guidelines all team members should follow when using them.
✅ Governance, evaluation and approval processes for AI tools.
✅ AI accountability and oversight, including risk assessment and regulatory compliance.
✅ Retention policy for AI-generated information and patient visit recordings.
✅ Transparency, including guidelines on when and how clinicians and patients should be made aware that AI is being used.
✅ Training, for example, incorporating AI training into the annual and ad hoc training program for everyone in the organization who uses AI.

https://lnkd.in/dQJEb8nK
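To make a policy like this auditable, some of it can be expressed as machine-readable configuration that intake and monitoring tools check against. Below is a minimal illustrative sketch in Python; every tool class, use case, and retention period in it is a hypothetical placeholder, not part of the AMA model policy.

```python
# Illustrative sketch only: a fragment of an AI use policy encoded as data,
# so intake and audit tooling can check requests against it. All tool names,
# categories, and retention periods here are hypothetical placeholders.

POLICY = {
    "definitions": {
        "generative_ai": "Models that produce text, images, or audio from prompts.",
        "machine_learning": "Statistical models trained on data to make predictions.",
    },
    "permitted_uses": {
        "public_tools": ["marketing_draft", "research_summary"],
        "approved_tools": ["ambient_clinical_notes", "coding_support"],
    },
    "prohibited_uses": ["phi_in_public_tools"],
    "retention_days": {"ai_generated_notes": 365, "visit_recordings": 90},
}

def is_permitted(tool_class: str, use_case: str) -> bool:
    """Return True if the use case is on the permitted list for this tool class."""
    return use_case in POLICY["permitted_uses"].get(tool_class, [])

# Example: drafting marketing copy with a public tool is allowed...
assert is_permitted("public_tools", "marketing_draft")
# ...but an unlisted use case is denied by default.
assert not is_permitted("public_tools", "clinical_diagnosis")
```

Deny-by-default is one design choice here, not something the toolkit prescribes; an organization could equally maintain an explicit prohibited list and allow everything else pending review.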
More Relevant Posts
"AI in Healthcare: Digging in the Wrong Spots" [Overseas friends: how much of this is true in your country?] At the HL7 Annual Meeting I just saw what's *easily* the most useful talk about AI in healthcare I've ever seen. It was by John Zimmerman of Carnegie Mellon and focused (FAR more than most of LinkedIn) on which uses actually GET anywhere, which uses CAN actually achieve a useful outcome. That focus is diametrically opposed to the wet-dream culture of seeking unicorns - and PATIENTS need y'all to achieve something USEFUL. Bottom line is to stop looking at the hardest, most amazing things, even though they're fascinating. While listening intently I could only capture a small fraction of points to share here but I hope to get much more from him. A few examples, not prioritized, shown in the photo composite I cobbled together: 1. Despite our imagination that AI will figure out everything from EMR access, IT'S HUGELY DIFFICULT, because the freaking data is heavily skewed, e.g. to whatever is billable. So, although sepsis is a huge killer, IT'S OFTEN NOT NOTED in ICU charts, because it's not billable(!!). 2. He cited Cassie Kozyrkov who has a great YouTube course on "making friends with machine learning - she says to find a really practical application for AI, you need to think of it as an island full of drunk people :-) Eager and friendly and usually pretty good but REALLY likely to make mistakes. So think: what kind of tasks can you give them? (I'm again reminded of the "trust but verify" rule from our paper at Division of Clinical Informatics DCI at BIDMC) 3. A taxonomy of 40 *commercially successful* AI products, segmented by : - How perfect does it need to be, for success? (Y axis) - How perfect *is* AI at the task? (X axis) LOOK: 25 of the 40 are in the left column, where the AI is moderately good at it but not brilliant! So think: what applications can you find where it'd be valuable to be wicked fast and PRETTY smart but not perfect? He also had a photo of a grizzly bear and cited the 2022 study that said 6% of people think they could win a fight with a grizzly ... and he said he's pretty sure all of them are AI product managers :) :) I'm going to take this thinking into the rest of the meeting week and beyond: what CAN we do with genAI, practically, without seeking perfection? Grace Cordovano, PhD, BCPA, Grace Vinton, Liz Salmi, Danny Sands, MD, MPH, Brian Ahier, Jan Oldenburg, Kim Whittemore, Anna McCollister, Amy Price MS, MA, MS, DPhil, James Cummings, Daniel Kraft, MD, Matthew Holt
Tech Leaders, make sure your data is "AI Ready" "Without clean, consistent, and connected data, AI models will struggle to produce accurate or meaningful insights." "AI readiness is not a one-time project. It is an ongoing commitment to ensuring that data is accurate, complete, contextually relevant, and available in real time." #AI #data #artificialintelligence Healthcare Business Today https://lnkd.in/ev22hCHu
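In practice, "an ongoing commitment" means scheduled, automated checks rather than a one-off cleanup. A minimal sketch of such checks (completeness, duplicate identities, freshness) using pandas follows; the column names and the one-day freshness threshold are assumptions for illustration, not from the article.

```python
# Minimal sketch of recurring data-readiness checks (completeness, consistency,
# freshness) of the kind the article describes. Column names and thresholds
# are illustrative assumptions.
from datetime import datetime, timedelta
import pandas as pd

def readiness_report(df: pd.DataFrame) -> dict:
    report = {}
    # Completeness: share of missing values per column.
    report["missing_rate"] = df.isna().mean().to_dict()
    # Consistency: duplicate patient records suggest broken identity linkage.
    report["duplicate_rows"] = int(df.duplicated(subset=["patient_id"]).sum())
    # Freshness: is the most recent update older than one day?
    newest = pd.to_datetime(df["updated_at"]).max()
    report["stale"] = bool(datetime.now() - newest.to_pydatetime() > timedelta(days=1))
    return report

df = pd.DataFrame({
    "patient_id": [1, 2, 2, 4],
    "hba1c": [5.9, None, 7.1, 6.4],
    "updated_at": ["2025-01-01"] * 4,
})
print(readiness_report(df))
```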
🩺 **OpenAI is making a bold move into healthcare AI.** With strategic hires from Doximity and Instagram, the company is building out a team dedicated to developing tools for both clinicians and consumers. The launch of HealthBench—a healthcare-focused AI evaluation standard—and GPT-5’s advanced health capabilities show how AI is moving beyond diagnosis to real-world support. This is about making healthcare more accessible, more transparent, and more patient-centric. Read the full story → https://hubs.li/Q03GDTMd0 #AI #HealthcareInnovation #HealthTech #OpenAI #GPT5 #DigitalHealth
Why is healthcare lagging behind in unlocking AI’s full potential? Because integration, regulation, and data challenges are slowing innovation more than the technology itself.

Before we can fully benefit from AI in healthcare, we have to confront these issues. Understanding the challenges is critical to unlocking AI’s full impact in HC. Here are some of the challenges that need to be addressed before we can fully benefit. (And yes, there are more than these.)

𝗘𝘁𝗵𝗶𝗰𝗮𝗹:
- Bias: AI may perpetuate or amplify biases in HC data, leading to unequal care across demographics. (A minimal audit sketch follows this post.)
- Impact on Patient-Provider Relationship: AI may reduce human interaction, empathy, and personalized care, potentially dehumanizing HC.
- Environmental and Social Implications: AI consumes large resources and energy, raising ethical questions about environmental sustainability and the social consequences of workforce displacement.

𝗧𝗲𝗰𝗵𝗻𝗼𝗹𝗼𝗴𝗶𝗰𝗮𝗹:
- Integration with Legacy Systems: Difficulty in connecting AI tools with outdated IT infrastructure.
- Handling Unstructured Data: Large volumes of data are unstructured and hard for AI to analyze.
- AI Hallucinations and Reliability Issues: AI models can generate incorrect or fabricated outputs, misleading clinical decisions.

𝗠𝗲𝗱𝗶𝗰𝗮𝗹:
- Increased Demand from AI-Driven Diagnostics: AI-enhanced disease detection increases demand for follow-up tests and interventions, potentially overwhelming HC capacity.
- Clinical Scope and Generalizability: AI models may have limited applicability outside the specific clinical contexts or patient populations they were trained on.
- Alignment with Local Care Practices: AI systems need to be adapted to the unique workflows, protocols, and standards of care specific to each HC setting or region.

𝗥𝗲𝗴𝘂𝗹𝗮𝘁𝗼𝗿𝘆:
- Need for Adaptive and Forward-Looking Regulation: Current regulations lag behind AI innovation, creating gaps in oversight and compliance.
- Governance of AI Use by Clinicians and Patients: GenAI tools like ChatGPT are increasingly used by clinicians and patients without clear policies or training.
- Liability and Accountability: Who is legally responsible when AI systems cause patient harm remains unclear and complex.

𝗧𝗿𝘂𝘀𝘁:
- Workforce Resistance: Distrust in AI due to fears of job displacement or lack of transparency.
- Transparency and Explainability: Many AI tools make it difficult for clinicians and patients to understand how decisions are made.
- Reliability and Performance Over Time: AI models may become less accurate over time.

𝗗𝗮𝘁𝗮:
- Limited Digitalization: A lot of HC data is not digitized, limiting AI’s access to comprehensive information.
- Data Quality and Accuracy: HC data often contains errors, inconsistencies, missing values, and outdated information.
- Data Privacy and Security: HC data is highly sensitive, raising concerns about unauthorized access and breaches.

What challenges would you add to the list?
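As referenced in the Bias item above, a subgroup audit is one concrete starting point: compare a model's error rates across demographic groups and flag gaps. A minimal sketch with synthetic data follows; a real audit would use held-out clinical outcomes and proper statistical tests.

```python
# Sketch of the subgroup audit implied by the "Bias" item above: compare a
# model's true-positive rate across demographic groups. Data here is synthetic.
from collections import defaultdict

def tpr_by_group(records):
    """records: iterable of (group, y_true, y_pred) with binary labels."""
    hits, positives = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            positives[group] += 1
            hits[group] += int(y_pred == 1)
    return {g: hits[g] / positives[g] for g in positives}

records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]
print(tpr_by_group(records))  # group_a ~0.67 vs group_b ~0.33: a gap worth investigating
```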
This is such an important breakdown. 🚀 As leaders, we often view AI in healthcare as a technology problem — but the true barriers lie in mindset, systems, and trust. Integration, regulation, ethics, and data quality are not obstacles to fear, but opportunities to reimagine how we deliver care. Driving meaningful change requires: 🔹 Courage to challenge legacy systems while investing in scalable infrastructure. 🔹 Collaboration across disciplines to align regulation, innovation, and patient safety. 🔹 Commitment to transparency and trust, ensuring AI enhances — not replaces — the human connection in healthcare. Unlocking AI’s full potential isn’t just about fixing gaps; it’s about leading with vision, adaptability, and accountability. Those who embrace this change mindset will help shape a more equitable, efficient, and human-centered future of healthcare.
We will struggle to find the sweet spot between reckless adoption and overcautious resistance to AI use in healthcare. Regulation especially struggles to keep pace with change, and attempts to suppress AI's use risk people going off-piste, either deliberately or naively. A bumpy ride ahead, I fear, but we must move forward and manage the associated risks. It will be worth it.
An Epic vision for using AI – but will it work in the real world? The company plans to infuse artificial intelligence throughout its applications, but larger questions remain for customers and the wider industry. https://lnkd.in/g5FCqhtj
This new IHI article offers guidance on implementing #AI governance to maximize the benefits of AI but minimize harm: https://ow.ly/lqL150WTrEo
🚨 AI is everywhere in healthcare—but do we have guardrails? A new survey of 233 health system leaders found: ◾ 88% are already using AI in some capacity. ◾ But only 18% report having a mature governance framework. ◾ In many organizations, 10+ departments are experimenting with AI independently—without a unified oversight strategy. 👉 Current use cases include ambient listening for clinical notes, coding support, denial prediction, and even patient-facing chatbots. The value is clear—but so are the risks. Without safeguards, AI can amplify bias, hallucinations, and workflow disruptions. One chilling example: models recommending inferior psychiatric treatments based on a patient’s race. What’s needed now: 🔹 Inclusive governance—bringing together clinical, IT, finance, HR, and compliance voices. 🔹 Clear policies for data use and vendor vetting. 🔹 Adoption of toolkits like the American Medical Association’s eight-step framework and industry roadmaps from groups like the Coalition for Health AI (CHAI). Healthcare is moving fast with AI adoption, but governance is lagging. To realize AI’s promise safely, systems must prioritize guardrails as much as growth. 💬 Curious—how is your organization approaching AI governance today? Full Article: https://lnkd.in/e64akYkB #AI #SafeGuards #Governance #AMA #CHAI #Healthcare
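When 10+ departments experiment independently, the first governance artifact is often just a central inventory of what is running where. A minimal sketch follows; the fields and example entries are hypothetical, not from the survey.

```python
# Minimal sketch of a central AI-tool inventory - the "unified oversight"
# starting point the survey suggests most systems lack. Fields and example
# entries are hypothetical.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    department: str
    use_case: str
    patient_facing: bool
    governance_reviewed: bool = False

registry: list[AITool] = [
    AITool("ambient-scribe", "Primary Care", "clinical notes", False, True),
    AITool("denials-predictor", "Revenue Cycle", "denial prediction", False, False),
    AITool("triage-chatbot", "Patient Access", "patient-facing chatbot", True, False),
]

# Surface the highest-risk gap first: patient-facing tools nobody has reviewed.
for tool in registry:
    if tool.patient_facing and not tool.governance_reviewed:
        print(f"NEEDS REVIEW: {tool.name} ({tool.department})")
```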
https://lnkd.in/gmfe8NUm

The AMA STEPS Forward® “Governance for Augmented Intelligence” toolkit, developed in collaboration with Manatt Health, is a comprehensive eight-step guide for health care systems to establish a governance framework to implement, manage and scale AI solutions. It includes a step dedicated to helping organizations develop AI policies. The foundational pillars of responsible AI adoption are:

1. Establishing executive accountability and structure.
2. Forming a working group to detail priorities, processes and policies.
3. Assessing current policies.
4. Developing AI policies.
5. Defining project intake, vendor evaluation and assessment processes.
6. Updating standard planning and implementation processes.
7. Establishing an oversight and monitoring process.
8. Supporting AI organizational readiness.
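Step 5 (project intake and vendor evaluation) lends itself to a structured checklist that produces a consistent go/no-go signal. The sketch below uses criteria and a threshold chosen for illustration; they are not the AMA toolkit's actual rubric.

```python
# Sketch of a structured vendor/intake checklist for step 5. The criteria and
# pass threshold are illustrative assumptions, not the AMA toolkit's rubric.

INTAKE_CRITERIA = {
    "hipaa_baa_signed": True,        # vendor will sign a Business Associate Agreement
    "training_data_disclosed": True, # vendor documents what the model was trained on
    "bias_evaluation_provided": False,
    "human_in_the_loop": True,
    "incident_reporting_plan": False,
}

def intake_decision(criteria: dict, required: float = 0.8) -> str:
    """Advance only when the share of satisfied criteria clears the bar."""
    score = sum(criteria.values()) / len(criteria)
    return "advance to pilot" if score >= required else "return to vendor with gaps"

print(intake_decision(INTAKE_CRITERIA))  # 3/5 = 0.6 -> "return to vendor with gaps"
```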
Founder & CEO @ Nozomi - Creating digital health products that bring positive emotions and engagement
Insightful Steven Dodsworth! Thank you for sharing!