AI in Healthcare
From Algorithms to Applications
Laman Mammadova, AI enthusiast
Understanding the Hierarchy: Artificial Intelligence, Machine Learning, and Deep Learning
Artificial Intelligence is machines’ capability to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making.
[Diagram: nested circles showing Deep Learning inside Machine Learning inside Artificial Intelligence]
Machine Learning is the field of Artificial Intelligence that focuses on the development of systems that can learn from data and generalise to unseen data, and thus perform tasks without explicit instructions.
Deep Learning is a subset of Machine Learning
that uses a specific type of ML algorithm inspired
by the structure and function of the human brain:
Artificial Neural Networks.
Applications of
Artificial Intelligence
in Healthcare
Early Diagnosis
AI can detect patterns in medical data that may be too subtle for the human eye, using these patterns to identify patients who are at risk of developing diseases years before they are clinically diagnosed.
Google Health's AI for Breast Cancer Diagnosis
● Trained on Large Datasets: Over 76,000
mammograms (U.K.) and 15,000 (U.S.) used for
training.
● Improved Accuracy: Reduced false positives by 5.7%
in the USA and 1.2% in the UK, and false negatives by
9.4% in the USA and 2.7% in the UK.
● Outperformed Radiologists: AI showed 11.5% higher
accuracy than human radiologists.
● Reduced Workload: cut the second radiologist's workload by 88% by handling the majority of the screening, allowing the radiologist to focus only on more complex cases flagged by the AI.
Drug Discovery
AI helps speed up the process of
discovering new drugs by analyzing
biological data, predicting how different
compounds will interact with diseases, and
identifying promising drug candidates
faster and more efficiently than traditional
methods.
Rentosertib is the first fully AI-designed drug to reach clinical trials. Developed to treat idiopathic pulmonary fibrosis (IPF), it is notable because both the biological target and the therapeutic compound were identified using generative AI, marking a significant breakthrough in the use of AI for drug discovery.
Robotic Surgery
AI enhances robotic-assisted surgery by
improving precision, control, and real-time
decision-making. It helps surgeons plan
procedures, avoid complications, and adapt
during surgery by analyzing medical images and
tracking instrument movement with high
accuracy.
The da Vinci system, one of the most widely
used robotic surgery platforms, incorporates AI
to assist in tasks like stabilizing instrument
motion, filtering tremors, and suggesting optimal
surgical paths based on anatomical data. Newer
systems are also using AI to analyze video
footage in real time, helping predict
complications or guide surgeons during complex
operations.
The Complex Nature
of Healthcare Data
and Its Impact on AI
Challenges in
Healthcare Data
for AI
Development
Data Quality
Incomplete, noisy,
and inconsistent
data
Data Diversity
Underrepresentation of certain populations
Data Privacy &
Security
Sensitive data with
strict regulations
Data Complexity
Multidimensional
and interrelated
data
Regulatory
Challenges
Slow and resource-intensive processes
Data Availability
Lack of large, high-quality datasets
Ethics and
Responsibility in
Healthcare AI
Bias and Fairness
Risks of biased AI
impacting certain
groups.
Transparency and
Explainability
Need for interpretable
AI decision-making.
Autonomy vs. AI
Assistance
Balancing AI support with human decision-making.
Data Privacy and
Consent
Ensuring patient data
is used appropriately.
Accountability and
Liability
Who’s responsible for
AI errors?
Access and Equity
Ensuring equal access
to AI technologies.
Timeline of
Breakthroughs and
Challenges in AI for
Healthcare
1971
INTERNIST-1
The first AI-based clinical
diagnostic system that used
rule-based logic and a medical
knowledge base to simulate
the clinical reasoning of an
experienced physician.
1986
MIT releases DXplain
A system that used a ranked
list of possible diagnoses with
explanations and learning
materials with disease
overviews for 500 diseases—
now expanded to more than
2,600 conditions.
2011
IBM Watson wins Jeopardy!
If Watson can find the right answer from millions of possibilities in a quiz show, maybe it can help doctors find the right diagnosis or treatment from thousands of diseases or drugs!
2015
IBM launches Watson for Oncology
Hospitals in countries like Thailand, India, and South Korea signed adoption agreements, drawn by the promise of bringing world-class cancer care to underserved regions.
2016
Google develops a DL model for diabetic retinopathy diagnosis
Using CNNs, it was trained on hundreds of thousands of annotated retinal fundus images. When evaluated, it performed on par with licensed ophthalmologists in detecting signs of eye disease.
2017
Arterys earns FDA approval
It utilized deep learning
algorithms to automate the
analysis of cardiac MRI images,
taking an average of 15
seconds to produce a result for
one case.
2020
AlphaFold predicts the
3D structures of
proteins
For 50 years, scientists had been trying to solve a biological Rubik’s Cube blindfolded. AlphaFold
came along and solved
it instantly, every time,
with stunning accuracy
— using AI.
What’s next for AI in
Healthcare?
Practical
Implementation:
Breast Cancer
Diagnosis
Resources
Breast Cancer Wisconsin Dataset
a) radius (mean of distances from center to points on the perimeter)
b) texture (standard deviation of gray-scale values)
c) perimeter
d) area
e) smoothness (local variation in radius lengths)
f) compactness (perimeter^2 / area - 1.0)
g) concavity (severity of concave portions of the contour)
h) concave points (number of concave portions of the contour)
i) symmetry
j) fractal dimension ("coastline approximation" - 1)
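The dataset above ships with scikit-learn, so the features can be inspected directly. A minimal sketch, assuming scikit-learn is installed (`load_breast_cancer` is its standard loader for this dataset):

```python
from sklearn.datasets import load_breast_cancer

# Breast Cancer Wisconsin (Diagnostic): 569 tumors, 30 features
# (the mean, standard error, and "worst" value of the ten
# measurements listed above), each labeled benign or malignant.
data = load_breast_cancer()
X, y = data.data, data.target

print(X.shape)                 # (569, 30)
print(data.feature_names[:5])  # e.g. 'mean radius', 'mean texture', ...
print(data.target_names)       # ['malignant' 'benign']
```

Each of the ten measurements from the list appears three times among the 30 columns, which is why the feature count is larger than the list suggests.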
How AI Diagnoses Tumors: Two Models We’ll Compare

Model 1: A simple model that draws a boundary between benign and malignant tumors based on their features. It calculates the probability of cancer using all features at once and classifies based on where the tumor falls relative to the boundary.

Model 2: An ensemble model that builds many small decision trees, each making a diagnosis based on different feature rules. The final prediction is made by majority vote — whichever label most trees agree on: benign or malignant.
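The two descriptions above match logistic regression (the boundary-drawing model) and a random forest (the tree ensemble). A sketch of the comparison using scikit-learn; the 80/20 split, standardization step, and hyperparameters are illustrative choices, not taken from the slides:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Model 1: logistic regression draws a single linear decision boundary
# in feature space and outputs a probability of malignancy.
# Features are standardized first so the solver converges cleanly.
logreg = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
logreg.fit(X_train, y_train)

# Model 2: a random forest of decision trees; the final label is the
# majority vote across all trees.
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)

print("logistic regression accuracy:", logreg.score(X_test, y_test))
print("random forest accuracy:      ", forest.score(X_test, y_test))
```

On this dataset both models typically score well above 90% test accuracy; the interesting comparison is in which cases each one misclassifies.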
THANK YOU!

Editor's Notes

  • #3: 🔹 Slide 1: Artificial Intelligence, Machine Learning, and Deep Learning (Expanded Script) Let’s begin by making sense of three terms that often get thrown around — Artificial Intelligence, Machine Learning, and Deep Learning. At first glance, they can sound intimidating or even interchangeable. But once we break them down and show how they’re connected, you’ll start to see the bigger picture — and it’ll make the rest of this talk much easier to follow. 🟦 First, Artificial Intelligence — or AI AI is the broadest concept here. Think of it as the entire universe of machines doing things that normally require human intelligence. What kinds of things? Tasks like understanding speech, recognizing images, solving problems, making decisions, or even learning and adapting over time. So if a machine can mimic human behavior in any way that seems intelligent — even just a little bit — we can call it AI. Now, that doesn’t mean robots with emotions or sci-fi-level intelligence. In fact, most AI today is very specific and narrow — it’s designed to do just one task very well, like predicting disease risk from patient data or analyzing medical images. Let’s say you’re at a hospital, and the computer system is flagging patients who might develop complications based on their test results and history. That’s AI — it’s helping doctors make better decisions by mimicking a small part of human reasoning. 🟦 Inside AI, we have Machine Learning Machine Learning — or ML — is a subset of AI. This is where we stop manually coding rules, and we start teaching machines by example. In traditional programming, we would give the computer all the rules ourselves. For instance: “If blood pressure is above 140 AND heart rate is above 100, then raise an alert.” But that only works if we know all the rules. And in medicine — as you probably know — things are rarely that simple. Symptoms vary. Exceptions happen. People respond differently. 
And sometimes, even doctors can’t clearly explain how they made a diagnosis — it’s often based on years of pattern recognition. So Machine Learning takes a different route. We say: “Here are 100,000 patient records — along with their actual diagnoses. You figure out what patterns link the data to the outcomes.” That’s machine learning — learning from past examples to make future predictions. There are two major ways machines learn this way: 1️⃣ Supervised Learning This is when we give the machine both the data and the correct answers. Think of it like flashcards. We show it thousands of chest X-rays, each labeled with “pneumonia” or “healthy.” The machine starts to notice which visual patterns go with which diagnosis. In healthcare, supervised learning is used for tasks like: Predicting whether a tumor is malignant or benign Forecasting hospital readmission risk Classifying skin lesions based on photos It’s called "supervised" because we are supervising the learning — we’re giving it the right answers to learn from. 2️⃣ Unsupervised Learning Here, we just give the machine data with no labels — and ask it to make sense of it. Imagine dumping in thousands of patient profiles with no diagnosis. The algorithm might find that some patients naturally group together based on shared symptoms or genetic profiles — even if no human had ever noticed that pattern before. This can lead to amazing discoveries — like identifying unknown subtypes of a disease, which might respond differently to treatment. In short: Supervised learning is giving the machines data with labels. Unsupervised learning is giving the machines only the data and relying on it to identify important features. 🟦 And finally, Deep Learning Now we zoom in even further — Deep Learning is a subset of Machine Learning, and it’s especially good at working with complex, high-dimensional data like images, audio, text, or even DNA sequences. What makes Deep Learning different? 
In traditional ML, we often had to tell the algorithm what features to look at. For example, if we’re trying to classify tumors in MRI scans, we might manually extract features like: The texture of the mass The symmetry The smoothness of the borders But what if we miss something subtle? What if we don’t know what features really matter? Deep Learning solves that by learning the features automatically from the raw data. We just feed it the image — and it figures out what’s important on its own. How? Through a structure called a Neural Network — inspired by how our brains work. These networks are made up of layers. Each layer learns something a bit more complex than the previous one: The first layer might detect edges and lines The next one might find shapes and contours The final layers might combine everything and say, “This looks like a tumor.” And the more data we give it, the better it gets — even if we don’t always understand how it’s making the decision. That’s why deep learning models can sometimes outperform experienced radiologists in detecting certain cancers — not because they’re smarter, but because they’ve seen more images than any human ever could in a lifetime. And that’s the power of deep learning — especially in healthcare, where data is complex, high-stakes, and full of patterns we might not even see. So to summarize: AI is the broad idea — machines doing intelligent things. Machine Learning is one way to build that intelligence — by learning from data. Deep Learning is the most advanced form — using neural networks to handle very complex patterns. These concepts build on each other — and you’ll see how they play out in real healthcare applications as we move forward.
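The supervised/unsupervised contrast described in this note can be shown in a few lines. An illustrative sketch on synthetic data using scikit-learn, not part of the original deck; the two "patient" clusters stand in for the labeled-flashcards and unlabeled-profiles examples above:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Synthetic "patients": 200 points in a 2-D feature space, two groups.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised learning: we hand the model both the data AND the correct
# labels (the "flashcards"), and it learns to predict labels for new cases.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised learning: same data, no labels; the algorithm groups
# similar patients together and may surface subgroups on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```

The clustering step gets no labels at all, which is exactly the distinction the note draws: supervised learning is judged against known answers, while unsupervised learning is judged by whether the groups it finds turn out to be meaningful.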
  • #5: One of the most powerful, and frankly life-saving, applications of AI in healthcare today is early disease diagnosis. Why does this matter so much? Because the earlier a disease is detected, the more treatable it usually is — and the higher the chances of survival. Take breast cancer, for example. Every year, over 2.3 million women around the world are diagnosed with breast cancer. And for many of them, the diagnosis comes too late. Let’s break this down: If breast cancer is detected at Stage 1, when the tumor is still small and hasn’t spread, the 5-year survival rate is over 99%. But if it’s found at Stage 4, when it’s already spread out to other parts of the body, that rate drops to about 30%. So the question becomes: How can we catch it earlier? Traditionally, breast cancer is detected through mammography, where radiologists examine X-ray images of the breast to spot potential abnormalities. But here’s the challenge: These images can be subtle and complex. Tumors don’t always look obvious. Breast tissue varies greatly between patients. And fatigue, bias, or variation in training can all affect what a radiologist sees. This is where AI — specifically, Deep Learning — offers an incredible opportunity.” 🤖 How AI Helps “Using deep learning algorithms, AI systems can be trained on tens of thousands of mammograms, learning to recognize even the faintest signals that might indicate cancer. What makes deep learning so powerful is that it can learn directly from the raw pixel data — it doesn’t rely on predefined features like size or shape. Instead, it finds patterns that humans may not even be aware of. And one of the most impressive real-world implementations of this is by Google Health.” 🔬 Case Study: Google Health’s AI for Breast Cancer Detection “In 2020, Google Health published a major study in Nature showcasing an AI model trained to analyze mammograms and help detect breast cancer. 
Let’s explore what made this model stand out.” 1️⃣ Trained on Massive, Diverse Datasets “The AI was trained on over 91,000 mammograms: 76,000 from the United Kingdom 15,000 from the United States These mammograms were linked to confirmed outcomes — whether cancer was later diagnosed — allowing the model to learn from real-world, clinically verified examples. That volume is extraordinary — it’s more than a radiologist would see in an entire career. What’s more, the data included cases from multiple screening programs, improving the generalizability of the AI model across different populations and imaging equipment.” 2️⃣ Improved Accuracy Over Radiologists “When the model was evaluated, it was compared to expert human radiologists — and the results were striking: In the U.S. test set, the AI reduced: False positives by 5.7% False negatives by 9.4% In the U.K. test set, it reduced: False positives by 1.2% False negatives by 2.7% These are real cases where someone might have been: Wrongly told they had cancer and gone through unnecessary stress and procedures Or missed entirely and left untreated until it was too late AI significantly lowered both risks — making screening both safer and more accurate.” 3️⃣ Outperformed Human Experts “In head-to-head comparisons, Google’s AI model showed 11.5% higher accuracy than human radiologists. That doesn’t mean it’s better than doctors in every way — but it means it can spot things we might overlook, especially when fatigue, image ambiguity, or subtlety come into play. And in medicine, a 10% improvement in detection is not a small win — it’s potentially thousands of lives. 4️⃣ Massively Reduced Radiologist Workload “In the U.K., where each mammogram is typically reviewed by two radiologists, the AI was tested as a first reader. It was able to correctly review the vast majority of cases, meaning the second radiologist only had to focus on edge cases flagged by the AI. 
This led to an 88% reduction in the workload of the second radiologist — freeing up valuable time and attention. That’s not just efficiency. That’s making it possible for overburdened health systems to keep up with demand — without sacrificing quality.” 🔄 AI as a Second Pair of Eyes — Not a Replacement “Importantly, this doesn’t mean AI replaces doctors. It means it works alongside them. It acts as: A safety net to catch what the human eye might miss A time-saver, helping prioritize the riskiest cases A tool, allowing radiologists to focus on complex diagnostics rather than routine review So When we talk about early diagnosis with AI we’re talking about: Better accuracy Fewer missed cases Less stress for patients More time for doctors And ultimately… more lives saved.
  • #6: 🔹 Slide: AI in Drug Discovery Now let’s switch gears a bit — and talk about one of the most exciting frontiers where AI is making a huge impact: drug discovery. This is an area that affects every one of us, even if we don’t realize it. Every pill, every vaccine, every treatment you or your loved ones have ever received — at some point, it had to be discovered, tested, and approved. But what exactly is drug discovery? Well, drug discovery is the long and complicated process of finding a new medicine that can safely treat a disease. And traditionally, this process has been incredibly slow, expensive, and unpredictable. Let’s walk through the traditional steps, just to appreciate the challenge. ⚗️ Step 1: Understand the disease Scientists first need to deeply understand what’s happening in the body. What causes the symptoms? What cells are involved? What goes wrong at the molecular level? For example, in cancer, you might want to know which genes are overactive and causing cells to divide uncontrollably. 🧬 Step 2: Identify a biological target This means finding a specific molecule in the body — often a protein — that plays a key role in the disease. Think of this like finding the “off switch” in a broken machine. If we can block or change that protein, we might be able to stop or slow down the disease. 💊 Step 3: Test drug candidates Now the hard part begins. Scientists test thousands — sometimes millions — of molecules to see which ones might bind to the target and change its behavior. It’s like searching for the right key that fits a very specific lock. And each test costs time, money, and resources. Even when a promising molecule is found, it needs to go through years of preclinical testing, clinical trials, and safety evaluations. All of this can take 10 to 15 years and cost over a billion dollars — for just one successful drug. And many attempts fail along the way. 🤖 So where does AI come in? 
This is where Artificial Intelligence is changing the game — especially a branch called Generative AI. Instead of testing every molecule in the lab, we can now use AI to simulate, predict, and even create new drug candidates — using data and computation instead of trial-and-error experiments. Let’s break that down: AI can analyze massive databases of biological, chemical, and genetic information — far more than any human team could process. It can learn patterns — for example, it can recognize how certain types of molecules tend to interact with certain proteins. And then — using those learned patterns — it can predict which proteins might be involved in a disease, even ones that haven’t been studied much before. Most impressively, generative AI can actually design new molecules from scratch — ones that have never existed before — but that have the right chemical properties to potentially bind to the disease-causing proteins. This is like giving the AI the rules of the game, and letting it invent new players — players that might just win. 💡 Real-World Breakthrough: Rentosertib To make this more tangible, let me tell you a real story — not from the future, but from right now. There’s a company called Insilico Medicine that used AI to create a brand-new drug called Rentosertib. Rentosertib has been developed to treat a very serious disease called idiopathic pulmonary fibrosis, or IPF. It’s a rare lung disease that causes scarring of the lungs — and makes it harder and harder to breathe. There’s no known cure yet. Here’s what’s groundbreaking: First, AI was used to analyze biological data and identify a protein called TNIK — which no one had previously targeted for this disease. That was the AI’s first job: figure out what might be causing the problem. Then, a second AI system was used to design a brand-new molecule — Rentosertib — that could bind to TNIK and potentially slow or stop the disease. 
And here's the stunning part: this entire process — from identifying the target to designing the drug — took less than 30 months. Insilico Medicine showed us what's possible when we treat AI not as a tool, but as a scientific collaborator. With the help of AI, we can:

● Discover new treatments for diseases that have stumped scientists for decades
● Reduce development time and cost, making medicine more accessible
● Reduce the need for animal testing and early-stage lab failures
  • #7: First, it's important to understand that robotic-assisted surgery has been around for a while. The concept itself isn't brand new. But what is new is how AI is being embedded into these systems — making them more precise, more responsive, and more intelligent than ever before.

🦾 What is robotic-assisted surgery?

At its core, robotic surgery means that instead of holding the surgical tools by hand, a surgeon uses a robotic system to control them — often with very fine, high-precision movements. So it's not the robot operating by itself. The surgeon is still fully in control, often seated at a console a few feet away, using joysticks or hand controls while watching a 3D view of the surgical site. These robots offer:

● Better precision, especially in delicate procedures
● Smaller incisions
● Less bleeding
● Faster recovery for patients

AI contributes to these systems in four main ways.

📸 1. Image Analysis and Navigation

AI can analyze preoperative medical scans — like CT or MRI images — and help the surgical team plan their approach before they even make an incision. For example, it can highlight critical areas to avoid, like nerves or blood vessels, and it can help identify the exact size and position of a tumor. During the operation, AI can continuously process live camera footage to track what's happening in real time. It acts like a GPS — helping guide the surgeon's tools to the right spot, and making sure every movement is on target. This kind of assistance reduces human error and improves safety.

✋ 2. Motion Scaling and Tremor Reduction

Even the steadiest human hands have natural tremors — very tiny, involuntary movements. AI-powered systems can detect those tiny tremors and filter them out, ensuring smoother movements of the surgical instruments. Also, if a surgeon's hand moves 1 inch, the robot might scale that down to just 1 millimeter — this is called motion scaling, and it allows for incredible precision.
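The two ideas behind motion scaling and tremor reduction are simple enough to sketch in code. This is a toy illustration only, not a real surgical control loop; the function names and the moving-average filter are my own simplifications (production systems use far more sophisticated adaptive filters):

```python
# Toy sketch of two robotic-surgery ideas: motion scaling (divide hand
# movement by a fixed factor) and tremor smoothing (moving average).

def scale_motion(hand_delta_mm: float, factor: float = 25.4) -> float:
    """Scale a hand movement down, e.g. 1 inch (25.4 mm) -> 1 mm."""
    return hand_delta_mm / factor

def smooth(samples: list[float], window: int = 5) -> list[float]:
    """Average each reading with its recent neighbors to damp tiny tremors."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

# A 25.4 mm (1 inch) hand motion becomes a 1 mm instrument motion.
print(scale_motion(25.4))  # 1.0

# Jittery position readings are pulled toward the underlying trend.
print(smooth([10.0, 10.2, 9.8, 10.1, 9.9]))
```

The key point the sketch makes: scaling and filtering both sit between the surgeon's hand and the instrument, so the surgeon stays fully in control while the machine removes noise.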
This is especially useful in microsurgery — such as procedures in the eye, brain, or heart — where even a tiny slip can have serious consequences.

🧠 3. Surgical Workflow Assistance

AI doesn't just help with hand control. It also provides decision-making support during surgery. As the operation progresses, AI can:

● Track what stage of the procedure the surgeon is in
● Suggest next steps
● Flag unusual situations — like unexpected bleeding or tissue appearance

Think of it like a really smart assistant — always paying attention, never getting tired, and trained on thousands of surgeries.

🎓 4. Training the Next Generation of Surgeons

Robotic systems can record the performance of a trainee and compare it to expert benchmarks. They can measure things like:

● Time taken for each step
● Smoothness of movements
● Accuracy of incisions

Then they can provide data-driven feedback — helping trainees improve faster and more objectively than with traditional methods.

⚠️ What AI is not legally allowed to do

Now, before we get too excited, let's talk about what AI cannot do — at least not yet. AI is not legally allowed to perform surgeries on its own. In every country with established medical safety standards — including the US, UK, and Europe — surgery must be performed under the control of a licensed human surgeon. Even if the AI is suggesting actions or guiding movements, the final decision must be made by a human. And this makes sense — because surgery is complex, and every patient is different. We're still a long way from trusting machines to operate without any oversight.

Regulatory agencies like the FDA (in the US), the EMA (in Europe), and the MHRA (in the UK) all require rigorous testing, clinical trials, and human supervision before any AI system can be used in a surgical setting. So when we talk about "AI in surgery," we're talking about human-AI collaboration, not full automation.
🛠️ Real Example: The da Vinci Surgical System

It's one of the most widely used robotic surgery platforms in the world — and it uses AI in many of the ways we just discussed:

● To stabilize movements
● To filter tremors
● To recommend optimal cutting paths based on preoperative scans

In more advanced versions, it can also analyze live video during surgery — helping spot potential issues early. And even though the robot never acts alone, it becomes a powerful extension of the surgeon's own hands and eyes, enhanced by AI's speed, memory, and pattern recognition.

So to sum up: AI in robotic surgery isn't about replacing doctors — it's about amplifying them. It gives them more precision, supports better decisions, improves patient outcomes, and trains the next generation of skilled surgeons. It's one of the clearest examples of how AI and humans can work together — combining the machine's processing power with the human's judgment and care.
  • #9: The challenges. Because the truth is — no matter how advanced or accurate your algorithm is, it's only as good as the data it learns from. And in healthcare, that data is... complicated. There are many obstacles that make working with medical data uniquely difficult — for researchers, data scientists, and even doctors. So let's walk through six of the biggest challenges you'll encounter when building AI for healthcare.

🟣 1. Data Quality

The first — and probably the most foundational — is data quality. Think about a patient's medical record. Ideally, it should tell the full story: test results, medications, diagnoses, symptoms, history... But in reality, that data is often:

● Incomplete — missing values, tests that were never uploaded, or unrecorded outcomes
● Inconsistent — one doctor might describe a symptom one way, another might use different wording
● Noisy — with errors, irrelevant details, or even contradictions

You might even have handwritten notes scanned as images — with no standard format, making them hard to read or extract information from.

🟣 2. Data Availability

Even when the data is clean, there's often not enough of it. In AI, especially in fields like deep learning, models need large, high-quality, labeled datasets to learn from. But in healthcare, such datasets are rare, because:

● Some diseases are rare, so there's naturally less data
● Getting data labeled by experts (like radiologists or pathologists) is expensive and time-consuming
● Most hospitals don't want to — or legally can't — share their data with outside researchers

So researchers often have to work with small datasets that might not generalize well to broader patient populations. And this limits the power and reliability of the AI models we can build.

🟣 3. Data Privacy and Security

And this brings us to the third challenge — privacy and security. Healthcare data is some of the most sensitive information that exists.
It includes personal identity, medical conditions, treatment history, and genetic data. Sharing or mishandling this information can have real consequences — legally, ethically, and emotionally. That's why we have strict regulations like:

● HIPAA in the U.S.
● GDPR in the EU
● MHRA/EMA guidelines in the UK and Europe

These laws are absolutely essential — they protect patients. But they also make it very hard to share data between hospitals, countries, or even research teams. And this tension — between innovation and regulation — is one of the biggest bottlenecks in the field.

🟣 4. Data Diversity

Now, even if we have data — and it's clean, labeled, and shareable — there's still a major issue we often overlook: data diversity. Many datasets in healthcare are not representative of the full population. They might contain mostly middle-aged, white patients from urban areas with access to high-quality hospitals. But what about children? Elderly patients? Rural communities? Low-income populations? If these groups are underrepresented in the training data, the AI models built from that data can be biased or even dangerous when applied to them. This has already been seen in real-world examples, like pulse oximeters that were less accurate on darker skin tones. So diversity in data isn't just a technical issue — it's a health equity issue.

🟣 5. Data Complexity

Next, we have data complexity. Healthcare data isn't like sales data or social media data. It's not just numbers in a spreadsheet. It's multidimensional and interconnected, and includes things like:

● Medical images (e.g., MRIs, CT scans)
● Genomic sequences
● Clinical notes written in natural language
● Time-series data from monitors or wearable devices
● Lab results, prescriptions, patient histories…

And often, we need to combine all of this to make a meaningful prediction or diagnosis.
This requires models that can handle different types of inputs, extract relationships between them, and reason over time — which is technically very challenging.

🟣 6. Regulatory Challenges

And finally — even once you've solved all of those problems and built a great model — you still have to get it approved. Healthcare is a highly regulated industry, and rightly so — lives are on the line. So any new AI tool must go through extensive validation, clinical trials, ethical reviews, and safety evaluations. And this can take years — with high costs and limited support. This means that even promising innovations often get stuck in the lab, unable to make it to hospitals or clinics. It's not just about building good AI — it's about getting it through the system safely and responsibly.

✅ Wrap-Up

So to wrap this up: while AI holds enormous potential in healthcare, realizing that potential means overcoming these six major data challenges. Because in the end, better data doesn't just make better models — it makes safer, fairer, and more effective healthcare for everyone. And if we want AI to improve lives — not just in theory, but in real clinics, for real people — then solving the data problem is just as important as solving the algorithm.
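The data-quality problems from challenge #1 — missing values and inconsistent wording — are easy to demonstrate concretely. Here is a minimal pandas sketch; the column names, records, and the median-imputation choice are all invented for illustration, not taken from any real clinical system:

```python
import pandas as pd

# Hypothetical messy clinical records: the same diagnosis written three
# different ways, plus a missing measurement and a missing label.
records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "diagnosis": ["hypertension", "HTN", "Hypertension ", None],
    "systolic_bp": [142.0, None, 138.0, 155.0],
})

# Harmonize inconsistent labels for the same condition.
records["diagnosis"] = (
    records["diagnosis"]
    .str.strip()
    .str.lower()
    .replace({"htn": "hypertension"})
)

# Impute the missing measurement with the column median (one common
# choice; the right strategy depends on the clinical context).
records["systolic_bp"] = records["systolic_bp"].fillna(
    records["systolic_bp"].median()
)

print(records)
```

Note that patient 4's diagnosis stays missing: cleaning can harmonize what was recorded, but it cannot invent what was never written down — which is exactly why data quality is a collection problem, not just a preprocessing problem.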
  • #10: 🔹 Slide: Ethics and Responsibility in Healthcare AI

Beyond the technical challenges, there's something even deeper we have to consider: ethics. Responsibility. Trust. Because at the end of the day, AI in healthcare isn't just about predictions or automation — it's about real people's health, lives, and dignity. And that means we have to be extra careful in how we design, use, and regulate AI systems in this space. Let's walk through six of the most important ethical and responsible AI principles that are shaping this field — and why each one matters.

⚖️ 1. Bias and Fairness

Again, imagine an AI system trained mostly on data from white patients. That system might miss early signs of skin cancer on darker skin tones. That's not just unfair — it's dangerous. So fairness isn't optional. We need to actively audit our models for bias, test them across diverse populations, and design with inclusion in mind from day one.

🔒 2. Data Privacy and Consent

Next: data privacy and consent. As we said, healthcare data is incredibly sensitive. So, when we use that data to train AI models, we have to be crystal clear:

● Was the data collected ethically?
● Did the patient give informed consent?
● Is the data being stored securely and used appropriately?

This is especially important when data is shared across different institutions.

🔍 3. Transparency and Explainability

Now let's talk about transparency and explainability — two things that are often lacking in AI. In many cases, even the developers of an AI system can't fully explain why it made a certain decision. That's a problem. Because in healthcare, doctors and patients need to understand why a model is recommending a treatment, flagging a tumor, or rejecting a diagnosis. Would you feel comfortable following an AI's medical advice if it can't explain its reasoning? Probably not. And rightly so.
So we need to build AI systems that are interpretable — showing which features led to a decision — and transparent, so users can trust what's happening behind the scenes.

🧾 4. Accountability and Liability

Here's a tough question: what happens when an AI system makes a mistake? Let's say it misses a cancer diagnosis. Or gives the wrong risk score. Or fails to detect an urgent condition. Who's responsible? The developer who built the model? The hospital that deployed it? The doctor who followed its recommendation? Right now, the legal system is still catching up — and there's no easy answer. But what's clear is that someone must be accountable. We can't allow AI to operate in a legal gray area. Patients deserve protection, and professionals need clarity about liability. So every healthcare AI system must come with clear guidelines about:

● Who oversees its performance
● Who approves its use
● Who takes responsibility when things go wrong

🧠 5. Autonomy vs. AI Assistance

The goal is not to hand over decisions to the machine, but to combine the strengths of both: the speed, memory, and pattern recognition of AI with the intuition, experience, and ethical reasoning of humans. This is called human-in-the-loop design — where AI assists, but never overrides, the final judgment of a qualified professional.

🌍 6. Access and Equity

And finally — access and equity. We often assume that once we build a great tool, everyone can benefit from it equally. But that's not how the real world works. If AI tools are only available in large hospitals, or only in wealthy countries, or only in English, then we risk widening existing healthcare gaps instead of closing them. Ethical AI must include designing for underserved communities, supporting low-resource clinics, and building solutions that work in different languages, cultures, and care settings. Because here, equity isn't just about fairness — it's about making sure every person has a chance to benefit from medical innovation.
  • #12: 🎤 SCRIPT: Milestones in AI for Healthcare – Breakthroughs and Lessons

"To really understand where we are today with AI in healthcare, we have to look at where we've been. Here's a timeline of the most defining moments — some breakthroughs, some hard lessons — that have shaped the journey so far."

🕰 1971 – INTERNIST-1

"It all began with INTERNIST-1, the first AI-based diagnostic system. It used rule-based logic and a medical knowledge base to mimic how an experienced physician reasons through a diagnosis. It was basic, but visionary — proving that machines could simulate human medical thinking."

🕰 1986 – DXplain by MIT

"A few years later, MIT developed DXplain, a system that not only listed possible diagnoses based on symptoms, but also provided explanations and educational material. It started with 500 diseases and now covers over 2,600 — used widely in medical training and clinical decision support even today."

🕰 2011 – IBM Watson wins Jeopardy!

"Fast forward to 2011 — Watson wins Jeopardy! This was huge. The AI beat human champions by understanding natural language and pulling answers from vast data. So the thinking was: if Watson can find the right answer from millions of possibilities in a quiz show… maybe it can help doctors choose the right treatment from thousands of diseases and drugs."

🕰 2015 – IBM launches Watson for Oncology

"That's when Watson for Oncology was launched. Hospitals in countries like India and Thailand signed on, drawn to the idea of using AI to provide world-class cancer care in underserved regions. But here's the hard truth: Watson didn't deliver. It struggled with real-world medical data — which is messy, complex, and often ambiguous.
It gave unsafe or impractical recommendations, and was heavily biased toward how one U.S. hospital practiced. In 2021, IBM sold off its Watson Health division — a quiet end to what was once the most high-profile promise in healthcare AI. It taught us something valuable: AI isn't magic. It needs clean data, local context, and clinical trust."

🕰 2016 – Google's AI for Diabetic Retinopathy

"Meanwhile, Google was training a deep learning model on retinal images to detect diabetic eye disease — using convolutional neural networks. When tested, it performed on par with licensed ophthalmologists. It showed us: deep learning could match experts in interpreting medical images — and maybe even scale expertise to places where specialists are scarce."

🕰 2017 – Arterys gets FDA approval

"In 2017, Arterys became one of the first AI tools approved by the FDA. It analyzed heart MRIs using deep learning — cutting the time to produce results from over half an hour… to just 15 seconds. That's not just convenient. In emergencies, it's life-changing."

🕰 2020 – AlphaFold cracks the protein folding problem

"And now — maybe the most jaw-dropping breakthrough in the timeline. For 50 years, scientists were trying to solve a biological puzzle: how does a protein fold into its 3D shape?

🧬 What is a protein, and what does 'folding' mean?

Proteins are the workhorses of your body — they do everything: digest food, fight infections, build muscle, transmit signals in the brain, and more. But a protein only works if it folds into the correct 3D shape. Think of a protein as a long string of beads (amino acids) — and depending on how it folds, it can become a scissors, a sponge, or a lock-and-key. The shape determines the function.

🧩 Why is folding such a big mystery?

A single protein can fold in trillions of different ways. Even a small error in folding can lead to diseases like Alzheimer's, Parkinson's, or cystic fibrosis.
Scientists knew the amino acid sequence (the "beads") for many proteins — but figuring out the 3D shape required expensive, time-consuming experiments in the lab (X-ray crystallography, etc.) that could take years per protein. It's like trying to solve a Rubik's Cube — blindfolded — with thousands of possible combinations.

🤯 Why AlphaFold was revolutionary

In 2020, DeepMind's AlphaFold model used AI to predict the 3D structure of a protein just from its amino acid sequence, with accuracy comparable to lab experiments — but in minutes instead of years, for thousands of proteins at a time. That meant:

● We could finally understand how proteins work
● We could design better drugs that bind to them
● We could explore new treatments for diseases tied to misfolded proteins
● We could unlock secrets in biology, agriculture, and synthetic biology faster than ever before

🧪 Why this deserved the Nobel Prize

In 2024, the people behind AlphaFold, Demis Hassabis and John Jumper, were awarded the Nobel Prize in Chemistry. Because AlphaFold didn't just make research faster — it made whole new types of research possible. It helped:

● Design enzymes to break down plastic
● Study rare diseases with no known protein structures
● Predict interactions in the human immune system
● Improve the development of vaccines and treatments

🧠 A simple analogy: imagine you had a magic microscope that could see the hidden machinery inside every cell in the world — instantly, and for free. That's what AlphaFold gave scientists: a clear look into life's inner machinery, no lab required. This wasn't just an AI success. It was a scientific revolution — with ripple effects across medicine, drug discovery, and biology."
  • #15: Let's start by understanding the data we'll be working with. This dataset comes from a clinical study conducted in the 1990s at the University of Wisconsin Hospitals. In it, researchers collected cell samples from breast lumps using a technique called fine needle aspiration, or FNA — which is shown in the image on the right. It's a minimally invasive procedure where a thin needle is inserted into a lump, and a small number of cells are extracted for microscopic analysis. These cells are then stained, photographed, and analyzed to look for patterns that could indicate whether the tumor is malignant (cancerous) or benign (non-cancerous).

You can see some of those cell samples on the left — the benign cells tend to look rounder, more regular, and less crowded. Malignant cells, in contrast, are irregular, more varied in shape, and often clustered in complex ways.

Now here's the key part: all of that complexity — all those subtle differences that pathologists would usually look at manually — was turned into numerical features so that machine learning algorithms could work with them.

🧠 So what exactly did the researchers measure?
They extracted 10 core features from each image — all of which describe the shape, size, and texture of the cell nuclei:

a) Radius – average distance from the center to the edge of the nucleus
b) Texture – variation in pixel intensity; this captures how rough or smooth the surface looks
c) Perimeter – the boundary length of the nucleus
d) Area – how large the nucleus appears in the image
e) Smoothness – how regular or jagged the edges are
f) Compactness – a ratio that tells us how tightly packed the shape is
g) Concavity – how deep the inward curves are along the edge
h) Concave points – how many of those inward curves exist
i) Symmetry – how mirrored the shape is
j) Fractal dimension – a mathematical measure of edge complexity, similar to a "coastline" shape

These features give us a rich, detailed description of the cells — which we can then use to train an AI model to recognize cancerous patterns. For each of those 10 original features, the researchers calculated three different values:

● Mean – the average value across all the nuclei in the image
● Standard error (SE) – a measure of uncertainty or variation in that feature
● Worst – which is actually the mean of the three largest values for that feature in the image

This gives us a much fuller picture — not just what the average cell looks like, but also how much variation there is, and what the most extreme or suspicious cells might look like in the same sample. This kind of statistical layering allows the model to consider not only general trends but also outliers, which can be crucial in detecting aggressive tumors.
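This dataset — the Wisconsin Diagnostic Breast Cancer data — ships with scikit-learn, so the 10 features × 3 statistics = 30 columns can be inspected directly. A quick sketch, assuming scikit-learn is installed:

```python
from sklearn.datasets import load_breast_cancer

# Load the Wisconsin Diagnostic Breast Cancer dataset (569 samples).
data = load_breast_cancer()

# 30 columns: each of the 10 measurements appears as mean, error, and worst.
print(data.data.shape)           # (569, 30)
print(data.feature_names[:3])    # ['mean radius' 'mean texture' 'mean perimeter']

# The two diagnosis classes we want the model to distinguish.
print(data.target_names)         # ['malignant' 'benign']
```

Note one quirk worth flagging to students: in scikit-learn's encoding, 0 means malignant and 1 means benign, which is easy to get backwards when reading predictions.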
  • #16: 🔹 Slide Title: How Our Models Think: From Simple Rules to Forests

Now that we've seen our dataset and what each feature means — like the radius, texture, and symmetry of a tumor — let's talk about how our models will actually use that information to make a diagnosis. We're going to compare two different approaches: Logistic Regression and Random Forest. But before we dive into Random Forest, we first need to understand the building block behind it: a Decision Tree.

🌳 Step 1: Decision Tree

Imagine you're a doctor, and you're diagnosing a patient. You might ask yourself a series of yes/no questions like this: is the radius of the tumor above 17? If yes, then ask: is the texture high too? If yes, maybe it's malignant; if no, maybe it's benign. If the radius is not above 17, then ask: is the symmetry very low? If yes, probably benign; if no, it might still be malignant.

That's basically how a decision tree works. It takes all the features of the tumor and asks questions step by step, moving down branches until it reaches a final answer: benign or malignant. Each branch splits the data based on one rule — like "Is radius > 17?" — until we end up with a leaf node, which is the final prediction. It's like playing 20 Questions, but the goal is diagnosis.

🌲 Step 2: Random Forest

Now imagine that instead of one decision tree, you had a forest full of them. Each tree:

● Looks at slightly different subsets of the data
● Chooses different features
● Builds its own rules

Maybe one tree decides based on radius and texture. Another tree might focus on perimeter and concavity. A third tree might think symmetry is more important. Each one makes its own prediction. Then all the trees vote, and the majority wins. That's Random Forest — a group of diverse decision trees working together.

Why is this better? One tree might overfit to small patterns or noise. But when many trees agree, we get more reliable, less biased predictions. So if most trees say "malignant," we trust that.
If most say "benign," we trust that. This is especially useful in medicine, where we want models that are robust and stable, not easily fooled.

➗ Step 3: Logistic Regression

Now let's talk about our second model: Logistic Regression. This one works a bit differently. It doesn't ask a series of yes/no questions. Instead, it looks at all the features at once and tries to draw a boundary — like a line — that separates malignant tumors from benign ones. You can imagine it looking at a huge scatter plot of tumors: on one side, mostly benign cases; on the other, mostly malignant. Logistic Regression tries to find the best dividing line — or curve — that splits those two groups. Then, when a new case comes in, it asks: "Where does this tumor fall? On the malignant side or the benign side?" It also gives a probability — for example: "There's a 78% chance this tumor is malignant."

So while it's not as complex as a forest, it's fast, simple, and very interpretable. Doctors can actually see which features influenced the result most — which is useful when you need explanations.

🧪 Why Use Both?

By comparing these two models, we can see how simple rules perform (logistic regression) versus how a more complex, collective model performs (random forest). This helps us understand what kind of AI works best for healthcare tasks like cancer diagnosis — where the cost of a mistake can be very high.
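The comparison described above fits in a few lines of scikit-learn. A minimal sketch, using default hyperparameters (a real clinical study would tune these, cross-validate, and look at far more than accuracy — especially false negatives):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hold out 20% of cases for testing, preserving the class balance.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Random Forest: 100 decision trees voting on benign vs. malignant.
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)

# Logistic Regression: one boundary over all 30 features.
# (max_iter raised because the default often fails to converge here.)
logreg = LogisticRegression(max_iter=5000)
logreg.fit(X_train, y_train)

print("forest accuracy:", forest.score(X_test, y_test))
print("logreg accuracy:", logreg.score(X_test, y_test))

# Logistic regression also reports a probability per case
# (column 1 = benign in scikit-learn's encoding).
print("P(benign), first test case:", logreg.predict_proba(X_test[:1])[0, 1])
```

On this dataset both models typically score well above 90% accuracy, which makes the talk's closing point concrete: the interesting differences are in interpretability and failure modes, not raw accuracy.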