https://lnkd.in/dWatdZcx From the article, "OpenAI Makes a Play for Healthcare": "When using AI, no matter how informed we may be about a topic, people tend to value the model’s recommendations over their own beliefs. This bias is made even more dangerous by the fact that AI is inherently a black box: we have no idea why or how it gets to the conclusions it does, making it harder to understand where the reasoning could have gone wrong and whether you should trust the model."
"OpenAI's AI in Healthcare: A Double-Edged Sword"
-
🩺 **OpenAI is making a bold move into healthcare AI.** With strategic hires from Doximity and Instagram, the company is building out a team dedicated to developing tools for both clinicians and consumers. The launch of HealthBench—a healthcare-focused AI evaluation standard—and GPT-5’s advanced health capabilities show how AI is moving beyond diagnosis to real-world support. This is about making healthcare more accessible, more transparent, and more patient-centric. Read the full story → https://hubs.li/Q03GDTMd0 #AI #HealthcareInnovation #HealthTech #OpenAI #GPT5 #DigitalHealth
-
The future of healthcare will be co-written by AI and the messaging around it will matter just as much as the models: https://lnkd.in/e-5FQ-EC Here’s what’s on my mind: - How will patients react to AI-enabled care? - Who controls the narrative around risk, bias, and equity? - How do we communicate AI's value without overhyping its capabilities? What’s your take on OpenAI’s move?
-
OpenAI is moving from powering others’ tools to building its own healthcare applications. With GPT-5’s strong performance on clinical benchmarks and the release of HealthBench, the company is positioning healthcare as a core pillar of its AI strategy. This marks a shift toward end-to-end solutions for clinicians and consumers, entering a space already shaped by Microsoft, Palantir, Abridge, and others. OpenAI’s direct entry signals the next stage of competition and innovation in healthcare AI. https://archive.ph/QIYjw
-
🧬 Clinical data registries hold massive potential — and AI can help unlock it. Nilesh Chandra, Steve Carnall, Erik Moen, and Nori Horvitz outline a guided approach to applying AI in registries, ensuring accuracy, scalability, and meaningful insights. With the right guardrails, AI can transform registries from static repositories into engines of discovery and care improvement. 🚀 The future of registries isn’t just storage — it’s intelligence in action. Read more: https://lnkd.in/dNzJYCYf #HealthDataManagement #AIinHealthcare #ClinicalData #HealthInnovation
-
How safe and reliable is AI in healthcare? With GPT-5, OpenAI takes a major step forward. Sam Altman describes it as “talking to an expert.” The model halves error rates, shows more empathy, and redirects users more often to professional care. With multimodal medical reasoning, from lab results to imaging, GPT-5 even outperforms many human experts. Harvard Business Review also highlights how users increasingly turn to AI for therapy and support. GPT-5 does not replace clinicians, but it can become a safer and more supportive partner in diagnostics and care pathways. Just three years after ChatGPT’s debut, this marks a shift from experimental tool to trusted companion. #healthcare #AI #genAI #OpenAI #digitalhealth
-
From pilot to scale: Making agentic AI work in health care. This article offers insightful reflections on the progress of advanced AI systems over the past two decades. The author compares the under-delivery of expert systems, despite massive investments during the 'AI Winter', to the tremendous recent advances in large language models.
-
"AI in Healthcare: Digging in the Wrong Spots" [Overseas friends: how much of this is true in your country?] At the HL7 Annual Meeting I just saw what's *easily* the most useful talk about AI in healthcare I've ever seen. It was by John Zimmerman of Carnegie Mellon and focused (FAR more than most of LinkedIn) on which uses actually GET anywhere - which uses CAN actually achieve a useful outcome. That focus is diametrically opposed to the wet-dream culture of seeking unicorns - and PATIENTS need y'all to achieve something USEFUL. Bottom line: stop looking only at the hardest, most amazing things, even though they're fascinating. While listening intently I could only capture a small fraction of the points to share here, but I hope to get much more from him. A few examples, not prioritized, shown in the photo composite I cobbled together:
1. Despite our imagination that AI will figure out everything from EMR access, IT'S HUGELY DIFFICULT, because the data is heavily skewed, e.g. toward whatever is billable. So although sepsis is a huge killer, IT'S OFTEN NOT NOTED in ICU charts, because it's not billable(!!).
2. He cited Cassie Kozyrkov, who has a great YouTube course on "Making Friends with Machine Learning." She says that to find a really practical application for AI, you need to think of it as an island full of drunk people :-) Eager and friendly and usually pretty good, but REALLY likely to make mistakes. So think: what kind of tasks can you give them? (I'm again reminded of the "trust but verify" rule from our paper at the Division of Clinical Informatics (DCI) at BIDMC.)
3. A taxonomy of 40 *commercially successful* AI products, segmented by:
- How perfect does it need to be, for success? (Y axis)
- How perfect *is* AI at the task? (X axis)
LOOK: 25 of the 40 are in the left column, where the AI is moderately good at the task but not brilliant! So think: what applications can you find where it'd be valuable to be wicked fast and PRETTY smart, but not perfect?
He also had a photo of a grizzly bear and cited the 2022 study that said 6% of people think they could win a fight with a grizzly ... and he said he's pretty sure all of them are AI product managers :) :) I'm going to take this thinking into the rest of the meeting week and beyond: what CAN we do with genAI, practically, without seeking perfection? Grace Cordovano, PhD, BCPA, Grace Vinton, Liz Salmi, Danny Sands, MD, MPH, Brian Ahier, Jan Oldenburg, Kim Whittemore, Anna McCollister, Amy Price MS, MA, MS, DPhil, James Cummings, Daniel Kraft, MD, Matthew Holt
-
This is a great observation about AI for medical use, and it reminds me of a podcast I listened to the other day that said AI is built to please the user. It will try to gauge what you want and give it to you, even if it's not correct. Humans still need to be heavily involved and verify - don't take results as fact, even if the AI tells you how smart and handsome you are.
-
Today's "Thought Leader Series" post focuses on a new AI tool developed by researchers at Mount Sinai's Icahn School of Medicine. The tool, known as AEquity, is designed to find and reduce biases in datasets used to train machine learning models – helping boost the accuracy and equity of AI-enabled decision-making. Learn more here: https://bit.ly/3Iir1hl Looking to incorporate AI into your business plans? Reach out to a Trexin Consulting Advisor today: https://bit.ly/3IGNKUg #AI #ArtificialIntelligence #Data #ML #MachineLearning #Healthcare #Consulting
Providing HIPAA Compliance solutions for health tech leaders. Solving what software can’t.
I think this is a natural human trait - to believe what anyone else says over their own feelings. It's a symptom of not trusting/believing in yourself. Social media has amplified it as well.