Level‑5 Healthcare: Why Prescribing Will Decide When AI Becomes a Real Doctor

Every week seems to bring another paper or podcast trumpeting the rise of diagnostic AI. Google DeepMind’s latest pre‑print on its Articulate Medical Intelligence Explorer (AMIE) is a good example: the model aced a blinded OSCE against human clinicians, but its researchers still set restrictive guardrails, forbidding any individualized medical advice and routing every draft plan to an overseeing physician for sign‑off. In other words, even one of the most advanced AI clinical systems stops at Level 3–4 autonomy—perception, reasoning, and a recommended differential—then hands the wheel back to the doctor before the prescription is written.

Contrast that with the confidence you hear from Dr. Brian Anderson, CEO of the Coalition for Health AI (CHAI), on the Heart of Healthcare podcast. Asked whether software will soon go the full distance, he answers without hesitation: we’re “on the cusp of autonomous AI doctors that prescribe meds” (18:34 of the episode), and the legal questions are now “when,” not “if.” His optimism highlights a gap in today’s conversation and research. Much like the self‑driving‑car world, where even Level 4 robo‑taxis still lean on remote human operators, clinical AI remains stuck below Level 5 because the authority to issue a lawful e‑script is still tethered to a human medical license.

Prescribing is the last mile of autonomy. Triage engines and diagnostic copilots already cover the cognitive tasks of gathering symptoms, ruling out red flags, and naming the likely condition. But until an agent can both calculate the lisinopril uptitration and transmit the order across NCPDP rails—instantly, safely, and under regulatory blessing—it will remain an impressive co‑pilot rather than a self‑driving doctor.

During my stint at Carbon Health, I saw that roughly 20 percent of urgent‑care encounters (and 60–70 percent during the pandemic) boiled down to a handful of low‑acuity diagnoses (upper‑respiratory infections, UTIs, conjunctivitis, rashes), each ending with a first‑line medication. External data echo the pattern: acute respiratory infections alone account for roughly 60 percent of all retail‑clinic visits. These are the encounters that a well‑trained autonomous agent could resolve end‑to‑end if it were allowed to both diagnose and prescribe.

Where an AI Doctor Could Start

Medication titration is a beachhead.

Chronic-disease dosing already follows algorithms baked into many guidelines. The ACC/AHA hypertension playbook, for instance, tells clinicians to raise an ACE-inhibitor dose when average home systolic pressure stays in or above the 130–139 mm Hg range, or diastolic pressure stays above 80 mm Hg, despite adherence. In practice, those numeric triggers languish until a patient returns to the clinic or a provider happens to review them weeks later. An autonomous agent that reads Bluetooth cuffs and recent labs could issue a 10-mg uptick the moment two out-of-range readings appear: no inbox ping, no phone tag. Because the input variables are structured and the dose boundaries are narrow, titration in theory aligns with FDA’s draft “locked algorithm with guardrails” pathway.
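The trigger described above is simple enough to sketch in code. The thresholds, dose step, and ceiling below are illustrative assumptions, not a validated protocol, but they show the shape of a titration rule fenced by an immutable dose cap:

```python
from dataclasses import dataclass

# Illustrative protocol parameters -- not clinical advice.
SYSTOLIC_GOAL = 130   # mm Hg
DIASTOLIC_GOAL = 80   # mm Hg
STEP_MG = 10          # uptitration increment
MAX_DOSE_MG = 40      # hard ceiling: an immutable guardrail the agent cannot cross

@dataclass
class Reading:
    systolic: int
    diastolic: int

def titration_decision(readings, current_dose_mg, adherent):
    """Return ('uptitrate', new_dose), ('hold', current_dose), or ('escalate', None)."""
    recent = readings[-2:]
    if not adherent or len(recent) < 2:
        return ("hold", current_dose_mg)
    out_of_range = [r for r in recent
                    if r.systolic >= SYSTOLIC_GOAL or r.diastolic >= DIASTOLIC_GOAL]
    if len(out_of_range) == 2:
        new_dose = current_dose_mg + STEP_MG
        if new_dose > MAX_DOSE_MG:
            # Never exceed the ceiling: route to a clinician instead.
            return ("escalate", None)
        return ("uptitrate", new_dose)
    return ("hold", current_dose_mg)
```

The point of the sketch is the architecture, not the numbers: the uptitration logic is deterministic, and the dose ceiling sits outside the decision path, so no input can push the agent past it.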

Refills are administrative drag begging for automation.

Refill requests plus associated messages occupy about 20 percent of primary-care inbox items. Safety checks (labs, allergy lists, drug–drug interactions) are deterministic database look-ups. Pharmacist-run refill clinics already demonstrate that protocol-driven renewal can cut clinician workload without harming patients. An AI agent integrated with the EHR and a PBM switch can push a 90-day refill when every guardrail passes and route a task to the care team when one fails. Because the agent is extending an existing prescription rather than initiating therapy, regulators might view the risk as modest and amenable to a streamlined 510(k) or enforcement-discretion path, especially under the FDA’s 2025 draft guidance that explicitly calls out “continuation of established therapy” as a lower-risk SaMD use.
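A minimal sketch of that guardrail gate might look like the following; every field name is a hypothetical stand-in for real EHR and PBM data, and the look-back windows are illustrative:

```python
def refill_decision(request, patient):
    """Approve a 90-day renewal only if every deterministic guardrail passes;
    otherwise emit a task for the care team with the failed checks attached."""
    checks = {
        "labs_current": patient["last_lab_days_ago"] <= 365,
        "no_allergy_conflict": request["drug"] not in patient["allergies"],
        "no_interaction": not set(patient["active_meds"]) & set(request["interacts_with"]),
        "recent_visit": patient["last_visit_days_ago"] <= 730,
    }
    if all(checks.values()):
        return {"action": "approve", "days_supply": 90}
    failed = [name for name, ok in checks.items() if not ok]
    return {"action": "route_to_care_team", "failed_checks": failed}
```

Note that nothing here is probabilistic: every check is a database look-up, which is exactly why refills are the easiest case to argue before a regulator.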

Minor‑Acute Prescriptions

Uncomplicated cystitis is an ideal condition for an autonomous prescriber because, in women aged 18–50, the diagnosis rests on symptoms alone. Dysuria and frequency with no vaginal discharge yield a post‑test probability above 90 percent, high enough that first‑line antibiotics are routinely prescribed without a urine culture.

Because the diagnostic threshold is symptom‑based and the therapy a narrow‑spectrum drug with well‑known contraindications, a software agent can capture the entire workflow: collect the symptom triad, confirm the absence of red‑flag modifiers such as pregnancy, flank pain, or recurrent UTI, run a drug‑allergy check, and write the 100 mg nitrofurantoin script, escalating to a clinician whenever any of those red flags appears.
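That workflow is essentially a decision table. A toy sketch, with illustrative field names and a deliberately over-inclusive red-flag list, could read:

```python
def uti_visit(intake):
    """Sketch of a symptom-driven cystitis workflow. Every field name is an
    illustrative assumption, not a real EHR schema or clinical protocol."""
    findings = set(intake["findings"])
    # Hard stops: anything outside the uncomplicated-cystitis box escalates.
    red_flags = {"pregnancy", "flank_pain", "fever", "recurrent_uti", "vaginal_discharge"}
    hits = sorted(red_flags & findings)
    if hits:
        return {"action": "escalate", "reason": hits}
    if not (18 <= intake["age"] <= 50):
        return {"action": "escalate", "reason": ["age_out_of_range"]}
    # Positive criteria: the symptom pattern that drives the >90% post-test probability.
    if not {"dysuria", "frequency"} <= findings:
        return {"action": "escalate", "reason": ["atypical_presentation"]}
    # Deterministic safety check before the script is written.
    if "nitrofurantoin" in intake["allergies"]:
        return {"action": "escalate", "reason": ["first_line_allergy"]}
    return {"action": "prescribe", "rx": "nitrofurantoin 100 mg PO BID x 5 days"}
```

The structure mirrors the prose: exclusion first, inclusion second, safety check last, and every exit that is not a clean prescription lands on a human.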

Amazon Clinic already charges $29 for chat‑based UTI visits, but every case still ends with a clinician scrolling through a template and clicking “Send.” Replace that final click with an FDA‑cleared autonomous prescriber and the marginal cost collapses to near-zero.

What unites titrations, refills, and symptom‑driven UTI care is bounded variance and digital exhaust. Each fits a rules engine wrapped with machine‑learning nuance and fenced by immutable safety stops—the very architecture the new FDA draft guidance and White House AI Action Plan envision. If autonomous prescribing cannot begin here, it is hard to see where it can begin at all.

The Emerging Regulatory On‑Ramp

When software merely flags disease, it lives in the “clinical‑decision support” lane: the clinician can still read the chart, double‑check the logic, and decide whether to act. The moment the same code pushes an order straight down the NCPDP SCRIPT rail, it graduates to a therapeutic‑control SaMD, and the bar rises. FDA’s draft guidance on AI‑enabled device software (issued 6 January 2025) spells out that higher bar. It asks sponsors for a comprehensive risk file that itemizes hazards such as “wrong drug, wrong patient, dose miscalculation” and explains the guardrails that block them. It also demands “objective evidence that the device performs predictably and reliably in the target population.” For an autonomous prescriber, that likely means a prospective, subgroup‑powered study that looks not just at diagnostic accuracy but at clinical endpoints—blood‑pressure control, adverse‑event rates, antibiotic stewardship—because the software has taken over the act that actually changes the patient’s physiology.

FDA already reviews closed‑loop dossiers, thanks to insulin‑therapy‑adjustment devices. The insulin rule at 21 CFR 862.1358 classifies these controllers as Class II but layers them with special controls: dose ceilings, automatic shut-off if data disappear, and validation that patients understand the algorithm’s advice. A triage‑diagnose‑prescribe agent could follow the same "closed-loop" logic. The draft AI guidance even offers a regulatory escape hatch for the inevitable updates: sponsors may file a Predetermined Change Control Plan so new drug‑interaction tables or revised dose caps can roll out without a fresh 510(k) as long as regression tests and a live dashboard show no safety drift.
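The Predetermined Change Control Plan is itself a gate, not a model. A sketch of that deployment gate, with the in-scope change types and the drift tolerance as stated assumptions:

```python
def can_deploy(update, regression_results, telemetry):
    """Gate a post-clearance update under a Predetermined Change Control Plan:
    ship only when the change is in the pre-authorized scope, every regression
    case passes, and live telemetry shows no safety drift. All field names and
    the 5% drift tolerance are illustrative assumptions."""
    in_scope = update["type"] in {"interaction_table", "dose_cap_revision"}
    all_pass = all(case["passed"] for case in regression_results)
    # "No safety drift": adverse-event rate within 5% of the cleared baseline.
    no_drift = telemetry["adverse_event_rate"] <= telemetry["baseline_rate"] * 1.05
    return in_scope and all_pass and no_drift
```

Anything outside the pre-authorized scope, a single regression failure, or drift in the live dashboard would force the update back through a full review instead of the PCCP fast path.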

Federal clearance, however, only opens the front gate. State practice acts govern who may prescribe. Idaho’s 2018 pharmacy‑practice revision lets pharmacists both diagnose influenza and prescribe oseltamivir on the spot, proving lawmakers will grant new prescriptive authority when access and safety align. California has gone the other way, passing AB 3030, which forces any clinic using generative AI for patient‑specific communication to declare the fact and provide a human fallback, signaling that state boards expect direct oversight of autonomous interactions. The 50-state mosaic, not the FDA, may be the hardest regulatory hurdle to cross.

Why It Isn’t Science Fiction

Skeptics argue that regulators will never let software write a prescription. But autonomous medication control is already on the market—inside every modern diabetes closed‑loop system. I have come to appreciate this technology as a board member of Tandem Diabetes Care over the last few years. Tandem’s t:slim X2 pump with Control‑IQ links a CGM to a dose‑calculating algorithm that micro‑boluses insulin every five minutes. Once prescribed, the system runs unsupervised: fenced autonomy in a narrowly characterized domain, machine‑readable guardrails enforced in code, and continuous post‑market telemetry to detect drift.

Translate that paradigm to primary‑care prescribing and the lift could be more incremental than radical. Adjusting lisinopril involves far fewer variables than real‑time insulin dosing. Refilling metformin after a clean creatinine panel is a lower‑risk call than titrating rapid‑acting insulin. If regulators were satisfied that a closed‑loop algorithm could make life‑critical dosing decisions, it is reasonable to believe that, given equivalent evidence, they will approve an AI that nudges antihypertensives quarterly or issues amoxicillin when a CLIA‑waived strep test flashes positive. The path is the same: bounded indication, prospective trials, immutable guardrails, and a live data feed back to the manufacturer and FDA.

Closed‑loop diabetes technology did not replace endocrinologists; it freed them from alert fatigue and let them focus on edge cases. A prescribing‑capable AI agent could do the same for primary care, starting with the arithmetic medicine that dominates chronic management and low‑acuity urgent care, and expanding only as real‑world data prove its worth. Once the first agent crosses that regulatory bridge, the remaining span may feel as straightforward as the insulin pump’s development and adoption looked in retrospect.

The diagnostic revolution has taught machines to point at the problem. The next leap is letting them reach for the prescription pad within carefully coded guardrails. Titrations, refills, and simple infections are the logical, high‑volume footholds. With Washington signaling an interest in AI for healthcare, the biggest remaining barriers may be downstream issues like medical liability and reimbursement. That said, once the first FDA‑cleared AI issues a legitimate prescription on its own, it may only be a matter of time before waiting rooms and wait lists shrink to fit the care that truly requires a human touch.

Raj Iyer

Director, EYP | Corporate Development | M&A | Product Strategy & Operations | AI Enthusiast | GTM & Strategic Partnerships | Specialized in Healthcare | Experience with B2B2C, Tools | Former Dentist | Fitness Aficionado

21h

Really enjoyed this Myoung Cha, it’s refreshing to see the “AI doctor” conversation tied to clear, real-world use cases like titrations, refills, and simple infections instead of just blue-sky predictions. The comparison to closed-loop insulin systems makes the path forward feel tangible. I like how you framed autonomy not as replacing clinicians, but as removing the low-value, repetitive work so they can focus on the tougher cases that truly need their expertise. It’ll be interesting to see how policy, reimbursement, and liability adapt so we can put these capabilities to work safely and at scale.

Howard Willson, MD, MBA

Building tech and improving patient care

1w

Interesting, Myoung. This got me thinking about other high-stakes technologies — aviation, self-driving cars, power grids — where full level 5 autonomy has been the goal for decades but still hasn’t been achieved. Yet prescribing — arguably just as consequential — may be the first to reach that level. Says a lot about how far and fast clinical AI is moving.

Nathan Gunn, MD

Physician-Founder | CEO, SecondLook Health | AI Engine Turning Complex Clinical Records into Actionable Insights for Insurers, Hospitals & Legal Teams

1w

Outstanding article!

I think AI should be used as a tool to support our analysis: sometimes we do not have access to huge datasets or, if they are available, we as humans are not able to connect the dots in complex scenarios and find similarities easily. What do you think, Myoung?

Hamid E. Nia, DO, MBA, MS, FACEP

Emergency Physician | Regional Medical Director | Clinical Informatics Leader | Driving AI & EHR Innovation in Patient Care

2w

Very insightful and very well written Myoung Cha. There’s definitely a bright and likely future for expanded access with artificial intelligence as you described it. It puts a lot of agency in the hands of the patients, which is overall good and opens a lot of opportunity. One thing less discussed is the change in behavior from all parties as we lower the barriers to deploying these tools. It could be Jevons’ paradox: increased efficiency leading to increased access leading to increased utilization. Services that demonstrate true meaningful use will be poised to reshape the way healthcare is delivered for decades.
