An article by Haiyang Yang, Tinglong Dai, and colleagues, recently published in npj Digital Medicine, examines how clinicians perceive peers who use generative AI in medical decision-making. The study highlights a dimension often overlooked in technology rollouts: the human factor in change management. Deploying AI in healthcare is not only a technical integration; it is a change-management process that requires attention to trust, professional identity, and peer perception.

Findings suggest that clinicians who use AI, particularly as a primary decision-making tool, are perceived less favorably than peers who do not use it. Even when AI is framed as a verification aid, perceptions improve but do not fully match those of the control group. As AI capabilities advance, perceptions will need to evolve so that adoption decisions are guided both by evidence of effectiveness and by an understanding of the human dynamics that shape acceptance.

Read more: https://lnkd.in/eDPKeZq2

Subscribe to the AI and Healthcare Substack to stay informed about the latest developments at the intersection of AI and medicine: https://lnkd.in/ezwGyhyd

#AI #MedicalAI #HealthAI
AI and Healthcare’s Post
More Relevant Posts
-
A new study looked at medical professionals and their use of artificial intelligence. It found that physicians perceived colleagues who relied on generative AI as less skilled, less trustworthy, and less empathetic. Medicine has typically been slow to embrace information technology, and this suggests the challenge with artificial intelligence is not only integration but also trust. https://lnkd.in/g92ujDB7
-
How Artificial Intelligence is Transforming Medical Practice in France

Artificial intelligence (AI) is gradually revolutionizing medicine in France, transforming not only diagnostic tools but also the patient-doctor relationship and hospital processes. Through powerful algorithms capable of analyzing massive volumes of data, AI improves diagnostic accuracy, anticipates certain pathologies, and optimizes care pathways. This evolution opens new perspectives for healthcare professionals while posing significant ethical and practical challenges. https://lnkd.in/e9iAAN-D
-
Interesting article on AI and “patient in the loop”. “Involving patients in the design and development of AI systems can play a pivotal role in making these technologies more acceptable and aligned with patient values. This approach reduces the risk of a mismatch between the capabilities of deployed solutions and the expectations of patients, ensuring that the systems are tailored to clinical practice needs.” https://lnkd.in/egrGQRAE
-
This study from the University of Munich shows where we already stand on comfort and confidence when #AI is used in healthcare. This is only the beginning: as AI becomes better and more people use it, confidence will increase. https://lnkd.in/gDTtH44c
-
🤔 Here is some food for thought from an article I just came across in JAMA Network Open. It looks at patient and public perceptions of clinicians who leverage #AI. Does reliance on AI enhance trust, or quietly erode it? The findings suggest that, whether AI is used for administrative, diagnostic, or therapeutic purposes, physicians who disclose its use are perceived as "slightly less competent, trustworthy, and empathetic" and are less likely to be selected by patients than those who do not mention AI use. #Patients #HealthcareInnovation #HealthInformatics #ClinicalAI #TrustInTech #MedTech #DigitalHealth #Healthcare #JAMA https://lnkd.in/eMmgNwC7
-
AI is everywhere in healthcare but patient trust will determine its real impact. If we’re transparent about how it works, keep clinicians in the loop, and prove results, AI can enhance care for everyone. How do you think healthcare can earn patients’ trust in AI? https://lnkd.in/eSAfyTrb
-
Patients worldwide are cautiously optimistic about the use of AI in healthcare. Most support it as a helpful assistant, but few trust it to replace doctors, according to a new study that reveals trust, concerns, and the need for explainable AI.

There has been plenty of research into the growing use of AI in medicine and how medical professionals feel about it. But there are far fewer studies into how patients, who are arguably the most significant stakeholders, feel about the use of medical AI.

Continue reading: https://lnkd.in/g9MCdDE8

#AIinHealthcare #PatientPerspectives #TrustVsConcern #ExplainableAI
-
What are patients' attitudes toward medical AI? There is plenty of information on what doctors think, but what do patients think, and how do these attitudes vary across demographics? A notable knowledge gap has been identified regarding patient attitudes, particularly at a large, international scale. Given the economic pressure behind adoption, patient acceptance is considered vital for the sustainable use of AI, yet the implications of AI applications for patient care are not always clear, and patients may not always have the opportunity to consent to its use. A new study of 14,000 patients explores patient perspectives and reports that "Patients whose health is directly affected by AI, either through improved treatment and diagnosis or by potential consequences of immature AI, may hold views that diverge substantially from those of clinicians". https://lnkd.in/eGawy7-Z
-
How do we ensure AI in medicine actually helps patients, instead of just looking good on paper? The era of celebrating predictive accuracy for its own sake is closing. The Journal of the American College of Cardiology (JACC) is redefining the gold standard for AI in cardiovascular care, shifting the focus from model performance to tangible, real-world clinical impact.

This is more than an academic guideline: it's a crucial roadmap for meaningful innovation. It challenges researchers and developers to bridge the vast gap between data science and daily clinical practice, ensuring our most advanced tools are built to be useful and safe at the point of care.

1. An AI model's value is now measured by the critical clinical need it solves, not by its performance metrics in isolation.
2. The focus is shifting from initial development (Stage 1) to proving feasibility, impact, and long-term viability in real-world settings (Stages 2-4).
3. Evaluation must be fit-for-purpose. Metrics must be justified by the specific clinical consequences of a model's predictions, moving beyond a simple reliance on AUROC.
4. Trust is foundational. Reproducibility, transparency in methodology, and the use of public benchmarks are now essential components of credible AI research.

What do you believe is the single biggest barrier to implementing AI effectively in clinical workflows today?

The message is clear: the future of medical AI will be shaped by those who translate algorithmic potential into measurable improvements in patient care. An AI model without a clear and practical path to clinical implementation is no longer an innovation; it's a distraction.

Source: DOI: 10.1016/j.jacc.2025.07.040
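The contrast in point 3, between raw discrimination (AUROC) and metrics tied to clinical consequences, can be illustrated with decision curve analysis, a standard technique in clinical prediction-model evaluation. The sketch below is not from the JACC article; the data is synthetic and the function names are my own. Net benefit weighs true positives against false positives using the odds of the decision threshold, so it reflects how costly a false alarm is at that threshold, something AUROC ignores.

```python
import numpy as np

def auroc(y_true, y_score):
    """Rank-based AUROC: fraction of positive/negative pairs where the
    positive case receives the higher score (ties counted as misses)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Compare every positive score against every negative score.
    return (pos[:, None] > neg[None, :]).mean()

def net_benefit(y_true, y_score, threshold):
    """Net benefit at a decision threshold (decision curve analysis):
    NB = TP/n - FP/n * pt/(1 - pt), where pt is the threshold probability."""
    y_true = np.asarray(y_true)
    treat = np.asarray(y_score) >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

# Tiny synthetic example: two diseased (1) and two healthy (0) patients.
y_true = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]
print(auroc(y_true, y_score))              # discrimination only
print(net_benefit(y_true, y_score, 0.3))   # utility at a clinical threshold
```

The same model can have an identical AUROC yet very different net benefit at the threshold clinicians actually use, which is the sense in which evaluation must be "fit-for-purpose".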
-
Artificial intelligence (AI) is transforming healthcare by enhancing diagnostics, personalizing medicine and improving surgical precision. However, its integration into healthcare systems raises significant ethical and legal challenges. This review explores key ethical principles—autonomy, beneficence, non-maleficence, justice, transparency and accountability—highlighting their relevance in AI-driven decision-making. Legal challenges, including data privacy and security, liability for AI errors, regulatory approval processes, intellectual property and cross-border regulations, are also addressed. As AI systems become increasingly autonomous, questions of responsibility and fairness must be carefully considered, particularly with the potential for biased algorithms to amplify healthcare disparities. This paper underscores the importance of multi-disciplinary collaboration between technologists, healthcare providers, legal experts and policymakers to create adaptive, globally harmonized frameworks. Public engagement is emphasized as essential for fostering trust and ensuring ethical AI adoption. With AI technologies advancing rapidly, a flexible regulatory environment that evolves with innovation is critical. Aligning AI innovation with ethical and legal imperatives will lead to a safer, more equitable healthcare system for all.