The most important aspect of designing processes that leverage human-AI symbiosis is task design and allocation. The cost aspect has been secondary for me, and for some industries, like healthcare, I did not consider cost (savings) at all, since the objective there is the well-being of humans. Then I came across this paper: https://lnkd.in/g986Z2PN It brings the cost element into the mix as well. I think our thought processes still sync, since the authors are not worried about cost savings, but rather the cost of errors.

The authors examine how to optimally share diagnostic tasks between AI and human radiologists in mammography screening, in order to minimize costs while maintaining or improving performance. They compare three strategies:
- Expert-alone: radiologists do all interpretations.
- Automation: AI alone does all interpretations.
- Delegation (hybrid): AI first assesses mammograms, and only those above some risk threshold are passed on to radiologists.

The researchers built an optimization model that accounts for costs such as follow-ups for false positives, litigation costs for false negatives, the cost of using AI, the cost of expert evaluation, the prevalence of disease, and the performance (sensitivity, specificity / AUC) of both AI algorithms and human experts. They then validate it by backtesting on real mammography datasets from a crowdsourced competition.

Key findings:
- Delegation (AI + human) often wins: under many realistic parameter settings, the hybrid/delegation strategy gives the lowest expected cost compared to human-only or full automation. In their backtests, the delegation strategy reduced costs by about 17.5% to 30.1% relative to the expert-alone strategy.
- Critical role of disease prevalence and cost trade-offs: which strategy is optimal depends strongly on how common breast cancer is in the screened population, and how severe (in cost) false negatives are relative to false positives. For example, when prevalence is higher or litigation costs for missing a cancer are large, strategies that reduce false negatives (even at the cost of more false positives) become more favorable.
- Effect of algorithm and litigation costs: lower costs for AI use and lower litigation costs tend to expand the range of conditions under which delegation is optimal. If AI is cheap and the risk of false negatives (or liability) is low, full automation becomes more plausible; high costs push decision-making toward human expert involvement.
- Performance thresholds matter: the relative performance of AI vs. radiologists (e.g., AUC, true/false positive rates) determines when one strategy overtakes another. As AI performance increases, there is a shift: expert-alone → delegation → automation.
- Liability asymmetry: if AI systems are held to stricter liability (e.g., legal standards, product liability) than humans, the cost of AI errors rises, which makes full automation less attractive.

#artificialintelligence
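The expected-cost comparison described above can be sketched in a few lines. This is not the paper's actual optimization model, just a minimal illustration of the logic; every parameter value below (prevalences, sensitivities, costs) is hypothetical, chosen only to show how delegation can undercut both pure strategies.

```python
# Minimal sketch of comparing expert-alone, automation, and delegation
# by expected per-patient cost. All numbers are made up for illustration.

def expected_cost(prevalence, sens, spec, c_read, c_fp, c_fn):
    """Expected per-patient cost of a single reader (AI or human)."""
    p_fn = prevalence * (1 - sens)        # missed cancer -> litigation cost
    p_fp = (1 - prevalence) * (1 - spec)  # false alarm -> follow-up cost
    return c_read + p_fn * c_fn + p_fp * c_fp

def delegation_cost(prevalence, ai_sens, ai_spec, h_sens, h_spec,
                    c_ai, c_human, c_fp, c_fn):
    """AI screens everyone at a high-sensitivity threshold; only flagged
    cases go to the radiologist. AI-negative cases exit unreviewed, so
    the AI's misses are final."""
    p_flag = prevalence * ai_sens + (1 - prevalence) * (1 - ai_spec)
    prev_flag = prevalence * ai_sens / p_flag  # prevalence among flagged (Bayes)
    cost_ai_miss = prevalence * (1 - ai_sens) * c_fn
    human_stage = expected_cost(prev_flag, h_sens, h_spec, c_human, c_fp, c_fn)
    return c_ai + cost_ai_miss + p_flag * human_stage

# Hypothetical setting: 1% prevalence, cheap AI that is weaker standalone
# but can be run at a very sensitive triage threshold.
kw = dict(prevalence=0.01, c_fp=500.0, c_fn=100_000.0)
human = expected_cost(sens=0.85, spec=0.90, c_read=30.0, **kw)
ai = expected_cost(sens=0.80, spec=0.90, c_read=5.0, **kw)
hybrid = delegation_cost(0.01, ai_sens=0.98, ai_spec=0.60,
                         h_sens=0.85, h_spec=0.90,
                         c_ai=5.0, c_human=30.0, c_fp=500.0, c_fn=100_000.0)
print(f"expert-alone: {human:.2f}  automation: {ai:.2f}  delegation: {hybrid:.2f}")
```

With these (invented) numbers the hybrid comes out cheapest, and you can reproduce the paper's qualitative findings by varying prevalence or the false-negative cost and watching the optimal strategy flip.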
Optimizing AI and human collaboration in mammography
More Relevant Posts
-
🚀 **AI in Healthcare: Revolutionizing Diagnostics and Patient Care** 🚀

In recent news, a groundbreaking study published in *Nature Medicine* highlights how AI algorithms can significantly enhance diagnostic accuracy in radiology. Using deep learning techniques, researchers found that AI models outperformed human radiologists in detecting various conditions, including lung cancer and fractures, with a notable increase in sensitivity and specificity. This comes at a crucial time when healthcare systems globally are under immense pressure to provide timely, accurate, and cost-effective care.

**Key Insights:**
1. **Transforming Diagnostics**: The ability of AI to analyze vast amounts of data at unprecedented speed is transforming diagnostics. More accurate detection means earlier intervention and potentially better patient outcomes.
2. **Augmenting Human Expertise**: Rather than replacing radiologists, AI serves as a powerful tool that enhances their capabilities. By automating routine tasks and providing second opinions, AI allows healthcare professionals to focus more on complex cases and patient interactions.
3. **Cost Efficiency**: As healthcare faces rising costs, AI can streamline workflows and reduce the burden of manual processes. This could potentially lead to significant healthcare savings, freeing up resources for other critical areas.
4. **Ethical Considerations**: With the integration of AI in healthcare comes the responsibility of addressing ethical concerns. Ensuring transparency in algorithmic decisions and safeguarding patient data will be crucial for public trust.
5. **Future Trends**: The success of AI in radiology sets a precedent for its adoption in other specialties, such as pathology and cardiology. Expect to see an accelerated push for regulatory approvals and a focus on developing clinical guidelines for AI-assisted diagnostics.

As we move further into this AI-driven era, the implications for the healthcare industry are profound.
The intersection of AI and patient care not only enhances efficiency but also raises important questions about training, workforce dynamics, and ethical considerations.

💬 **I encourage my fellow professionals to share your thoughts**: How do you see AI reshaping your field? What are your views on the ethical implications of AI in healthcare? Let's foster a discussion about how we can collectively navigate these advancements while prioritizing patient care and professional integrity.

#ArtificialIntelligence #Healthcare #DiagnosticAccuracy #AIinMedicine #Radiology #MachineLearning #PatientCare #EthicsInAI
-
"Why didn't AI replace radiologists yet?" I get this question 3x per week. "Computer vision is solved," I heard Lucas Beyer say once. So... how hard can it be? Didn't Google show that it could detect lung cancer better than humans more than a decade ago?

Then I shadowed a radiologist for a few hours. Dark room, massive screens. Click. Scan. Diagnosis in 30 seconds: "Nothing to see here." Quick dictation to Microsoft Nuance. Done. Next patient.

Then he showed me 'the AI': Right click → AI menu → Load tool → Wait 10 seconds → Select "lungs" → Wait 15 seconds → Colorful circles appear on screen. Out of an abundance of caution, it overflags. Leading to more work, not less. And potentially more false positives, instead of fewer.

Radiologists are already impressively optimized. They obsess about every click. They've trained their internal models ("intuition") not just on images, but on all the other highly detailed contextual signals, too: patient history, what the referring doctor told them on the phone, the particular quirks of this machine, the radtech who made a small remark when walking over just now... Oh, and don't forget: what gets reimbursed. Incentives rule the world.

That means that even if we get the AI right, the UI needs to be absolutely perfect. For instance: instead of firing up your local machine when you click "AI", preprocess all the patients from the last hour. Then first bring them a batch with the ones that the model confidently sees as "no cancer", like DeepHealth does.

I'm also super excited about some of the new things I've seen come up, like Aidoc or what Wouter Veldhuis is cooking: stepping away from point solutions towards end-to-end workflows. But that still requires better AI models. And there are many hard problems that won't make this easy. For instance:
→ Medical language is often strategically fuzzy. "Cannot rule out"
→ There is a long tail of rare, unusual, and complex diagnoses
→ No reimbursement; limited incentive
→ Very, very high standards for medical devices

That doesn't mean AI isn't already helping radiology today. For instance, at Ahead Health, we use it to determine various imaging-derived biomarkers. Moreover, we use it to help explain things in a way that you _and_ your non-radiology doctor understand. And at our partner MRI clinics, we've squeezed every last bit out of regulatory-approved AI for a custom protocol. More data, but fewer minutes in a loud machine.

Anyway, Out-Of-Pocket did an excellent write-up I loved reading. Link in comments.
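The batching idea above (preprocess studies, surface the model's confident negatives as one fast batch, then work down by risk) can be sketched in a few lines. This is a hypothetical illustration, not DeepHealth's actual product logic; the threshold and field names are invented.

```python
# Hypothetical sketch of a triage worklist: confident "no cancer" cases
# are grouped into one batch for rapid review, everything else is
# ordered most-suspicious-first. Threshold value is made up.

from dataclasses import dataclass

@dataclass
class Study:
    patient_id: str
    cancer_risk: float  # model's predicted probability, precomputed offline

def build_worklist(studies, confident_negative=0.02):
    """Return (batch of confident negatives, remaining queue by risk)."""
    negatives = [s for s in studies if s.cancer_risk < confident_negative]
    rest = sorted((s for s in studies if s.cancer_risk >= confident_negative),
                  key=lambda s: s.cancer_risk, reverse=True)
    return negatives, rest

studies = [Study("A", 0.001), Study("B", 0.40), Study("C", 0.01), Study("D", 0.08)]
batch, queue = build_worklist(studies)
print([s.patient_id for s in batch], [s.patient_id for s in queue])
```

The point of the design is exactly what the post argues: the model runs before the radiologist clicks anything, so the UI cost of "using AI" drops to zero.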
-
Ultimately, healthcare requires a human to be responsible, which is why radiologists have absolutely nothing to worry about. I find the current state of treating radiologists and pathologists (most of whom are frankly brilliant) as easily replaced technicians to be a complete delusion. There is another layer of security that comes from a human with a limited amount of time and a lot to lose if they get it wrong. And perhaps that's one of the biggest things that AI is completely off the mark on: the hard part (and in many cases the most important part) of being a doctor in practice is being not wrong, more than anything else. It takes a couple of years to learn the didactics, and then 6 to 10 more to learn the practice: learning when not to treat, learning when not to operate, and the judgement to sense when something is "off" and you should dig more.
-
AI in Action: Real-Life Chest X-Ray Software for Pneumonia Detection

1. Practical Applications & Companies Making an Impact
• MHC's ChestInsight AI (UAE): MHC offers a suite of AI tools, including ChestInsight AI, which detects pneumonia, nodules, effusions, TB, and more with over 95% sensitivity, and delivers results in under 5 seconds. It's CE-marked, FDA-cleared, MOH-approved in the UAE, and fully integrated into existing PACS/RIS systems.
• GE Healthcare – Thoracic Care Suite: a bundle of AI algorithms (including from Lunit INSIGHT CXR) that identifies pneumonia (e.g., from COVID-19 or TB) among other pathologies. Designed to help triage and speed up diagnosis during high-demand scenarios like pandemics.
• AZmed's AZchest: recently FDA-cleared, AZchest assists radiologists in detecting lung nodules and triaging pneumothorax and pleural effusion on chest X-rays. Clinical validation shows dramatic improvements, including a 10% increase in sensitivity when radiologists were assisted by the AI.
• CheXNet (research prototype): a deep-learning model that achieved radiologist-level performance in detecting pneumonia on chest X-rays, trained on the large ChestX-ray14 dataset.

2. Real-World Impact & Clinical Performance
• Improved sensitivity: in a real-world study, AI assistance improved radiologists' pneumonia detection sensitivity from 0.673 to 0.719, while maintaining specificity across other pathologies like nodules and pleural effusion.
• Explainability with heatmaps: AI-generated heatmaps (e.g., Grad-CAM) highlight suspect regions, such as the lower lobes, helping radiologists interpret why the AI flagged certain areas.
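The sensitivity figures quoted above (0.673 → 0.719) are just TP / (TP + FN) from a confusion matrix; a small helper makes the metrics concrete. The case counts below are invented for illustration, not taken from the study.

```python
# Sensitivity (recall) and specificity from raw confusion-matrix counts.
# The hypothetical reader study below: 1000 cases, 100 with pneumonia.

def sensitivity(tp, fn):
    """Fraction of actual positives the reader caught."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of actual negatives correctly cleared."""
    return tn / (tn + fp)

unaided = dict(tp=67, fn=33, tn=810, fp=90)
aided = dict(tp=72, fn=28, tn=810, fp=90)  # AI assistance catches 5 more

print(sensitivity(unaided["tp"], unaided["fn"]))  # 0.67
print(sensitivity(aided["tp"], aided["fn"]))      # 0.72
```

Note the trade-off the post glosses over: a sensitivity gain only matters clinically if specificity holds, which is why the study reports both.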
-
𝗔𝗜-𝗣𝗼𝘄𝗲𝗿𝗲𝗱 𝗠𝗮𝘀𝘀 𝗦𝗲𝗴𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻 𝗶𝗻 𝗠𝗮𝗺𝗺𝗼𝗴𝗿𝗮𝗺𝘀: 𝗗𝗿𝗶𝘃𝗶𝗻𝗴 𝗔𝗰𝗰𝘂𝗿𝗮𝘁𝗲 𝗕𝗿𝗲𝗮𝘀𝘁 𝗖𝗮𝗻𝗰𝗲𝗿 𝗗𝗲𝘁𝗲𝗰𝘁𝗶𝗼𝗻

AI-driven semantic and instance segmentation is revolutionizing the way radiologists detect and evaluate masses in mammograms. By precisely highlighting suspicious regions, AI supports:
✅ 𝗘𝗮𝗿𝗹𝘆 𝗗𝗲𝘁𝗲𝗰𝘁𝗶𝗼𝗻: Identify potential malignancies at the earliest stages
✅ 𝗥𝗲𝗱𝘂𝗰𝗲𝗱 𝗙𝗮𝗹𝘀𝗲 𝗣𝗼𝘀𝗶𝘁𝗶𝘃𝗲𝘀: Minimize unnecessary biopsies and patient anxiety
✅ 𝗘𝗻𝗵𝗮𝗻𝗰𝗲𝗱 𝗖𝗼𝗻𝗳𝗶𝗱𝗲𝗻𝗰𝗲: Assist radiologists in making informed diagnostic decisions
✅ 𝗦𝘁𝗿𝗲𝗮𝗺𝗹𝗶𝗻𝗲𝗱 𝗪𝗼𝗿𝗸𝗳𝗹𝗼𝘄: Support efficient clinical evaluation and reporting

At 𝗠𝗮𝗿𝘁𝗲𝗰𝗸 𝗦𝗼𝗹𝘂𝘁𝗶𝗼𝗻𝘀, we specialize in high-precision 𝗺𝗲𝗱𝗶𝗰𝗮𝗹 𝗶𝗺𝗮𝗴𝗲 𝗮𝗻𝗻𝗼𝘁𝗮𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝗹𝗮𝗯𝗲𝗹𝗶𝗻𝗴, enabling segmentation models to perform with exceptional accuracy in breast imaging. Our expertise ensures that AI systems receive the most detailed, accurate, and reliable annotations, helping healthcare professionals deliver better patient outcomes.

Connect with us to explore how our annotation and labeling services can empower your AI and medical imaging projects.

#MarteckSolutions #Healthcare #AI #ML #Artificialintellegence #MachineLearning #MedicalImaging #AIinHealthcare #RadiologyAI #BreastCancerAwareness #MedicalInnovation #HealthcareAI #MassSegmentation #PrecisionDiagnostics
-
𝗔𝗜-𝗣𝗼𝘄𝗲𝗿𝗲𝗱 𝗕𝗿𝗲𝗮𝘀𝘁 𝗖𝗮𝗻𝗰𝗲𝗿 𝗗𝗲𝘁𝗲𝗰𝘁𝗶𝗼𝗻: 𝗣𝗿𝗲𝗰𝗶𝘀𝗶𝗼𝗻 𝗧𝗵𝗿𝗼𝘂𝗴𝗵 𝗘𝘅𝗽𝗲𝗿𝘁 𝗔𝗻𝗻𝗼𝘁𝗮𝘁𝗶𝗼𝗻

AI is transforming breast cancer detection by making mammogram analysis more accurate and reliable. At 𝗠𝗮𝗿𝘁𝗲𝗰𝗸 𝗦𝗼𝗹𝘂𝘁𝗶𝗼𝗻𝘀, we provide high-quality 𝗺𝗲𝗱𝗶𝗰𝗮𝗹 𝗶𝗺𝗮𝗴𝗲 𝗮𝗻𝗻𝗼𝘁𝗮𝘁𝗶𝗼𝗻 and 𝗹𝗮𝗯𝗲𝗹𝗶𝗻𝗴 that empowers AI models to:
✅ Detect potential malignancies at earlier stages
✅ Reduce false positives, minimizing unnecessary biopsies
✅ Enhance radiologists' confidence in diagnosis
✅ Improve efficiency in clinical workflows

With our expertise, healthcare AI systems achieve greater precision in breast imaging, ultimately supporting better patient outcomes. Let's connect to explore how we can support your AI and medical imaging projects.

#MarteckSolutions #Healthcare #AI #ML #Artificialintellegence #MachineLearning #BreastCancerAwareness #MedicalImaging #AIinHealthcare #RadiologyAI #BreastCancerDetection #MedicalAnnotation #HealthcareInnovation
-
🚀 How can AI redefine healthcare diagnostics?

"AI Mode Is Good" and the innovative skin cancer learning app developed by a dermatologist are two powerful examples of how artificial intelligence is transforming the field of medicine.

Key Insights:
- AI is enhancing diagnostic accuracy and efficiency in healthcare.
- Dermatologists can leverage AI to improve skin cancer detection, potentially saving lives.
- The fusion of technology and healthcare is paving the way for personalized medicine.

🔍 Contextual Insights: As AI continues to evolve, its role in healthcare can't be overstated. From automating routine tasks to providing deep analytical insights, AI is setting new standards in patient care. Imagine a world where AI assists in identifying skin conditions with precision, allowing dermatologists to focus on treatment and patient care.

💬 What are your thoughts on AI's potential in healthcare? Have you experienced or witnessed any AI-driven innovations in your field? Share your insights!

👉 If you found this interesting, share it with your network to spread the word on AI in healthcare.

Reflecting on this transformation, it's essential to embrace technology while maintaining the human touch in healthcare. As we innovate, let's ensure that compassion remains at the heart of patient care.

#AIinHealthcare #Innovation #SkinCancer #TechnologyInMedicine #AI #HealthcareRevolution
-
Applications of ML/DL that are actually used in radiotherapy today:

1) Imaging & contouring
- Auto-segmentation (OARs & targets): U-Net-style models speed up and standardize contours across many disease sites; large reviews and multi-site evaluations show strong performance but emphasize clinical QA and dosimetric checks.
- Image registration & motion: deep models (including GAN-based and unsupervised approaches) improve deformable registration for CBCT/MR guidance and inter-fraction alignment.
- Image enhancement & synthesis: DL denoises and reduces artifacts in CBCT, and generates synthetic CT (sCT) from MR/CBCT for MR-only workflows and adaptive replanning. Multiple recent reviews and clinical feasibility studies back this up.

2) Treatment planning
- Dose/DVH prediction & knowledge-based planning (KBP): models predict voxelwise dose and DVHs to guide auto-planning and flag sub-optimal plans; this is now studied across many sites and validated in challenges like OpenKBP.
- Fully/semi-automated planning: KBP and DL predictors feed optimization to generate clinically acceptable plans with reduced iteration time.

3) Online & offline adaptive RT
- ART pipeline acceleration: AI helps correct CBCT/MR images, propagate and segment structures, and generate rapid re-plans during a session (e.g., prostate), improving consistency and speed.

4) Quality assurance & safety
- Automated plan QA: ML predicts gamma pass rates and flags risky plans; rules-plus-ML systems support automated chart checks.
- QA for the AI itself: best-practice frameworks outline technical, dosimetric, and clinical evaluation before deployment.

5) Outcomes, toxicity & decision support (radiomics/dosiomics)
- Outcome prediction: image- and dose-derived features with ML predict control/survival and guide risk stratification.

6) NLP on the clinical workflow
- From notes to data: NLP extracts toxicities, treatments, and events from EMR text; it also classifies incident-learning reports (RO-ILS/NSIR-RT) to improve safety loops.

#Innovation_Radiotherapy #AI_RT #Conformity_AI
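The cumulative DVH that the KBP models above predict is itself easy to compute from a structure's voxel doses. The sketch below is a plain-Python toy with made-up dose values, not a clinical tool; real implementations work on full 3D dose grids and structure masks.

```python
# Toy cumulative dose-volume histogram (DVH): for each dose level d,
# the fraction of the structure's volume receiving at least d Gy.
# Dose values below are invented for illustration.

def cumulative_dvh(doses_gy, bin_width=1.0):
    """Return (dose levels, fraction of volume receiving >= each level)."""
    n = len(doses_gy)
    max_dose = max(doses_gy)
    levels, fractions = [], []
    d = 0.0
    while d <= max_dose:
        levels.append(d)
        fractions.append(sum(1 for v in doses_gy if v >= d) / n)
        d += bin_width
    return levels, fractions

# V20 for a hypothetical OAR: fraction of its volume receiving >= 20 Gy,
# a typical constraint a plan-QA check would evaluate.
doses = [5.0, 12.0, 18.0, 22.0, 25.0, 30.0, 8.0, 21.0]
levels, fractions = cumulative_dvh(doses, bin_width=1.0)
v20 = fractions[levels.index(20.0)]
print(f"V20 = {v20:.3f}")
```

Dose/DVH prediction models learn to output exactly these curves from CT and contours, so a sub-optimal plan shows up as a DVH that deviates from the predicted achievable one.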
-
#AI is revolutionizing chest #Xray and #CTscan analysis for #pulmonarydiseases by detecting patterns faster, supporting triage and helping #doctors prioritize urgent cases, leading to quicker diagnoses. Via https://lnkd.in/dmVGYyve #artificialintelligence #aiinhealthcare