Patients worldwide are cautiously optimistic about the use of AI in healthcare. Most support it as a helpful assistant, but few trust it to replace doctors, according to a new study that examines patient trust, concerns, and the demand for explainable AI. There has been plenty of research into the growing use of AI in medicine and how medical professionals feel about it, but there are far fewer studies into how patients, who are arguably the most significant stakeholders, feel about medical AI. Continue reading: https://lnkd.in/g9MCdDE8 #AIinHealthcare #PatientPerspectives #TrustVsConcern #ExplainableAI
Patients cautiously optimistic about AI in healthcare, study shows
More Relevant Posts
-
AI is everywhere in healthcare, but patient trust will determine its real impact. If we’re transparent about how it works, keep clinicians in the loop, and prove results, AI can enhance care for everyone. How do you think healthcare can earn patients’ trust in AI? https://lnkd.in/eSAfyTrb
-
This study from the University of Munich shows where we already stand on comfort and confidence when #AI is used in healthcare. This is only the beginning: as AI gets better and more people use it, confidence will increase. https://lnkd.in/gDTtH44c
-
How Artificial Intelligence is Transforming Medical Practice in France Artificial intelligence (AI) is gradually revolutionizing medicine in France, transforming not only diagnostic tools but also the patient-doctor relationship and hospital processes. Through powerful algorithms capable of analyzing massive volumes of data, AI improves diagnostic accuracy, anticipates certain pathologies, and optimizes care pathways. This evolution opens new perspectives for healthcare professionals while posing significant ethical and practical challenges. https://lnkd.in/e9iAAN-D
-
This study is a great reminder of something that's far too easy to overlook: Patients don’t experience AI in terms of "potential" like many researchers do. They experience it in the context of their own health. When you’re well, AI may feel like an interesting supplement. When you aren't, however, trust and human connection become far more important. To me, this shows that generally speaking, patients don't inherently fear technology, but they don't fully trust it either. We owe it to patients to make sure AI adoption in medicine is built around explainability, empathy, and a physician-led model.
-
An international research collaboration led by Chinese scientists has unveiled a vision-based foundation model that promises to transform eye care worldwide. #ChinaSeen Through an international clinical trial, it has demonstrated how artificial intelligence (AI) will soon be able to assist doctors in settings ranging from primary care clinics to specialist centers. https://lnkd.in/grN86cue
-
AI vs doctors: The wrong question for healthcare’s future? Bold claims from industry often suggest AI will replace doctors. But medicine is far more complex than any controlled vignette or algorithm can capture. Real clinical practice involves uncertainty, collaboration, judgement, and above all, trust between doctor and patient… qualities that cannot be replicated by AI algorithms. The evidence consistently shows that hybrid models, where clinicians and AI work together, outperform either alone. The real opportunity lies in partnership: AI systems co-designed with clinicians, supported by strong regulation, and integrated into workflows to reduce inefficiency, improve safety, and deliver more personalised care. The question isn’t whether AI can replace doctors; it can’t, and it shouldn’t. The challenge is how industry, regulators, and clinicians can work together to ensure AI makes good doctors even better. Co-authored with Dr Mary Madden & Dr Remi Paramsothy Responsible AI UK Guy's and St Thomas' NHS Foundation Trust #AIinHealthcare #DigitalHealth #FutureOfMedicine #MedicalAI #HealthcareInnovation
-
An article by Haiyang Yang, Tinglong Dai and colleagues recently published in npj Digital Medicine examines how clinicians perceive peers who use generative AI in medical decision-making. The study highlights a critical dimension often overlooked in technology rollouts: the human factor in change management. Deploying AI in healthcare is not only a technical integration; it is a change management process that requires attention to trust, professional identity, and peer perception. Findings suggest that perceptions of clinicians who use AI, particularly as a primary decision-making tool, can be less favorable than perceptions of peers who do not. Even when AI is framed as a verification aid, perceptions improve but do not fully match the control group. As AI capabilities advance, perceptions will need to evolve so that adoption decisions are guided both by evidence of effectiveness and by an understanding of the human dynamics that influence acceptance. Read more: https://lnkd.in/eDPKeZq2 Subscribe to the AI and Healthcare Substack to stay informed about the latest developments at the intersection of AI and medicine: https://lnkd.in/ezwGyhyd #AI #MedicalAI #HealthAI
-
From improving the patient experience to accelerating drug discovery, AI is already starting to transform medicine worldwide. Read our latest blog spotlighting key themes from Peter Lee's "The AI Revolution in Medicine, Revisited":
-
⚡ False Intelligence, Part IV: The Wrong Incentives In medical AI, the problem isn’t just the wrong endpoints. It’s the wrong incentives. We reward the illusion of progress. 📈 Research incentives Most AI studies optimize for technical endpoints: did the model detect the nodule, classify the lesion, or match the radiologist? That’s publishable science. But is it usable medicine? Two recent reviews say no. - Fukumoto et al.: lung nodule detection had just 24.2% PPV and 7 false positives per scan, mostly perifissural or perivascular nodules of little consequence. - Paramasamy et al.: only 38.1% accuracy for part-solid nodules—the most clinically important. Reviews agree: high false-positive rates, limited generalizability, and a persistent gap between technical performance and real-world benefit. ⚠️ The result: papers filled with models that look impressive in development but collapse in deployment. 🏥 Hospital incentives Hospitals adopt AI expecting efficiency. Instead, radiologists often face increased workload. Every false positive demands another read, another scan, another explanation to the patient. Instead of reducing fatigue, AI can multiply cognitive burden. False alarms don’t just waste system resources—they also drive unnecessary follow-ups that erode patient trust and delay real care. Hospitals can publish press releases about “AI-powered” innovation, but if frontline clinicians are spending more time correcting algorithms than treating patients, the outcome is a net loss. 💰 Startup incentives VC money flows to what scales and clears regulation fast. That means optimizing for minimum viable approval—not maximum clinical impact. Companies succeed when they raise the next round, not when patients live longer or recover better. ⚖️ Regulatory incentives It is far easier to validate AI against labels (was the nodule detected? was the scan classified correctly?) than against lives. Regulators provide a clear path for performance benchmarks but only a vague one for outcome impact. 🔻 The result is a system where: - Researchers chase elegant metrics, - Startups chase funding milestones, - Hospitals chase prestige, and - Regulators chase safety checkboxes. And in this chase, the patient is lost. The incentives align beautifully for everyone—except the only person who truly matters. This is why so much of medical AI looks good on paper but fails in practice. Not because people are malicious, but because the system rewards the wrong victories. Until we flip the incentives to center patient outcomes—not just technical accuracy—medical AI will keep producing more papers, more press, and more false intelligence. The next era of medical AI must not just ask: “Does it work?” It must ask: “Does it matter?” #FalseIntelligence #MedicalAI #Radiology #HealthTech #PatientCenteredCare Source: - Fukumoto et al. 2025. doi:10.1007/s11604-024-01704-2 - Paramasamy et al. 2025. doi:10.1007/s00330-024-10969-0
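For context on the metrics above: positive predictive value (PPV) is the fraction of AI-flagged findings that turn out to be real disease. Here is a minimal Python sketch using hypothetical tallies chosen only to roughly reproduce the figures quoted from Fukumoto et al.; the counts below are illustrative, not data from the cited study.

```python
def ppv(true_positives: int, false_positives: int) -> float:
    """Positive predictive value: fraction of flagged findings that are real."""
    return true_positives / (true_positives + false_positives)

# Hypothetical tallies, chosen only so the outputs match the quoted figures
# (24.2% PPV, ~7 false positives per scan); not taken from the paper.
tp, fp, scans = 224, 700, 100

print(f"PPV: {ppv(tp, fp):.1%}")                      # 24.2%
print(f"False positives per scan: {fp / scans:.1f}")  # 7.0
```

At 24.2% PPV, roughly three out of every four AI flags are false alarms, which is exactly what drives the extra reads, scans, and patient explanations described above.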
-
Interesting article on AI and the “patient in the loop” concept. “Involving patients in the design and development of AI systems can play a pivotal role in making these technologies more acceptable and aligned with patient values. This approach reduces the risk of a mismatch between the capabilities of deployed solutions and the expectations of patients, ensuring that the systems are tailored to clinical practice needs.” https://lnkd.in/egrGQRAE