Healthcare Analytics Digest February 2023
Image by Damir Omerovic on Unsplash

Healthcare Analytics Digest February 2023

Business of Healthcare

CVS is working hard to become the center of the healthcare experience for people across the country with an acquisition of Oak Street Health, which does primary care for Medicare Advantage patients in 21 states. Combined with Aetna, they can now cover the majority of a patient's interactions with the healthcare industry. These full-service providers having nationwide reach and unburdened by hospitals (UnitedHealth is another example) are the most immediate threats to large health systems, not Amazon or Best Buy.

CVS Health to acquire Oak Street Health

CVS becoming dominant force in healthcare services: analysts (fiercehealthcare.com)

Apple, after a decade of R&D, thinks they've almost figured out how to build a component into the Apple Watch that can passively monitor blood glucose levels. Leaving aside jokes about AFib false positives or ski slope 911 calls, assuming they get the measurements reliable it could make a massive difference for people with diabetes: imagine a companion app where you enter your food to understand what exactly (and how much) causes your blood sugar to spike.

Apple Watch Blood Glucose Monitor Could Revolutionize Diabetes Care (AAPL) - Bloomberg

England's NHS is "at the forefront of innovation" by deploying algorithms to predict and prevent patient no-shows. I started predicting no-shows in 2014, so the main claim amuses me, but there are some interesting things here - overbooking a slot when the risk is high enough (of course, this brings the whole patient flow system into the problem space), and specifically offering particular appointments to patients based on what's likely to be successful.

NHS England » NHS pilots artificial intelligence software to cut missed hospital appointments

I've written a few times about the ongoing issues with the VA's Cerner implementation. My general take is that it's about as messy as I would expect for an EHR implementation of that size and complexity, it just happens to be occurring in public. However, some members of Congress are frustrated by the pace of progress and issues that have occurred, and have introduced bills to establish strict measures of progress that have to be met before any further implementations are started, or even to scrap the whole thing and put them back on the VA's homegrown, nearly 30-year-old VistA (to be clear, this is not the consensus view in Congress). Oracle swung back hard at the idea that VistA is a viable solution with a weird core argument about how people don't really like vinyl - their chief dig about VistA is that it's as old as and built on the same underlying technology as Epic! Their second letter seems like a more effective core message: ignore what the users think about it, because the real customers are the veterans (plus now they're paying Accenture to take the accountability for user training).

House lawmakers want VA's $20 billion-plus electronic health record program to improve or else - FCW

Top Senator Says Modernizing VA’s EHR 'Is Not Optional' - Nextgov

Veterans Deserve Better than VistA (oracle.com)

It’s about the Veterans (oracle.com)

Oracle Cerner signs contract with Accenture to provide extra electronic health record training for VA clinicians | FedScoop

About a quarter of hospitals are meaningfully complying with the requirement to post their negotiated prices, rather than hiding the document or making it unusable; the requirement has been in force for two years. Because of the cost-shifting that funds our healthcare system and the secrecy that makes it possible, many industry groups see this requirement as a serious threat and try to discredit these unofficial reports, but they are showing their work and making the case clearly. The Peterson Center and KFF release a more-detailed look at the data quality and comparability issues, showing that even for those systems trying in good faith to comply, it's difficult to use the data to compare between systems. CMS says the compliance rate for their sample is 70%, so there may not be a whole lot more official pressure to improve.

FEBRUARY 2023 SEMI-ANNUAL COMPLIANCE REPORT — PatientRightsAdvocate.org

Ongoing challenges with hospital price transparency - Peterson-KFF Health System Tracker

Hospital Price Transparency: Progress And Commitment To Achieving Its Potential | Health Affairs

The American health system has developed around care being financed mainly through employer insurance coverage; this means that whether you work enough hours for a big enough company largely determines whether you have insurance, but also what kinds of services outside the basics are covered. Amazon warehouse jobs, in addition to being notoriously brutal physical labor, have an automated hiring process and offer fertility benefits from day one, so people are doing what it takes to get the care they're seeking. The piece linked here is staunchly anti-Amazon, but they are offering this benefit voluntarily, so it's more a recognition of a weird edge case in the way the system we have built plays itself out.

Amazon Fertility Benefits Have A Dark Side For IVF Patients (thecut.com)

To date, 500 algorithms have been approved by the FDA as medical devices, the vast majority in imaging. To some extent this is an artifact of the rules - if an algorithm reads an image, it's automatically classified as a medical device. This is becoming a normal thing, and the new guidance should make it easier to retrain, etc, so I expect this trend to continue.

FDA has now cleared more than 500 healthcare AI algorithms (healthexec.com)

Technology and Society

One of my favorite fringe technologies is back in the news - de-extinction! Colossal is the company that promised us a woolly mammoth in the next five years. Now they're planning to de-extinct the dodo as a showcase for their platform to preserve endangered bird species. Sounds cool, and I'm sure pivoting to birds will make dinosaurs easier, but this also strikes me as "mammoths are hard, and we need more money". I'm also including a really deep article on why all this de-extinction might be essentially impossible for any reasonable definition, but that doesn't mean the work isn't valuable.

Dodo (colossal.com)

De-Extinction? Surely You’re Joking! (substack.com)

Unsafe handling of devices used in monkey's brains carrying the risk of diseases spreading to humans is almost exactly the combined setup of Lawnmower Man and 28 Days Later. If someone had to guess one person who would be the instigator of a sci-fi/horror mashup like this, I bet most people would get it right on their first try.

Elon Musk's Neuralink is under investigation (cnbc.com)

Privacy and Security

Monthly reminder that you are the product: grocery stores use their loyalty programs to collect data about their customers and their purchases, and then sell this data as a side business. This is not only to sellers trying to understand how to competitively position their products in the stores, but also to the general personal information market used by advertisers across different media to target products.

Forget Milk and Eggs: Supermarkets Are Having a Fire Sale on Data About You – The Markup

The FTC is trying a new tactic to advance health data privacy: GoodRx isn't a covered entity, so it's not bound by HIPAA, but they were targeted for enforcement of a "data breach" due to their sharing information with advertisers using the "pixel" technology that many health systems realized was problematic last June. The argument here is that the general promises the company made not to share data makes this a deceptive/unfair practice. I imagine that the aftermath of this will be EULAs getting longer and more inscrutable.

FTC goes after GoodRx for sharing users' health data (fiercehealthcare.com)

A pretty minor attack, by recent standards, but it looks like hospitals in the United States and several European countries have become another theater of war for Russia in its ongoing war against Ukraine. Damage has been pretty minor overall, so thank your local cybersecurity professional.

Dutch, European Hospitals 'Hit by Pro-Russian Hackers' - SecurityWeek

Russian-backed hackers actively targeting US health care sector, HHS warns | The Hill

Data Science and Engineering

The last 30 years of progress in AI has largely come from having compute power so abundant and cheap that the old rules-based approaches were completely outpaced by machine learning algorithms detecting associations by thinking about data stochastically. However, to keep progressing, especially to the "prescriptive analytics" on the right edge of so many consultant slides, we need algorithms that can reason causally about the structure of the problem at hand. This article is a great introduction to Judea Pearl's way of thinking about the issue without any of the notation.

To Build Truly Intelligent Machines, Teach Them Cause and Effect | Quanta Magazine

Attempts at differential privacy tend to struggle with destroying enough information to permit re-identification of values without destroying too much of the meaning. Google released an approach by creating a framework that removes outliers so that the privacy of the leftover "core" data can be preserved with less noise.

FriendlyCore: A novel differentially private aggregation framework – Google AI Blog (googleblog.com)

Generative AI

This was probably the best month for Microsoft being seen as cool since maybe the Xbox released with Halo in 2001. They announced that OpenAI will start providing services to enhance Bing (which is tired of your jokes about it, and, for the record, never asked to be created) and Azure; Google panicked and rushed their announcement of Bard in response, and the mistake in the demo video was enough to make it drop $100 billion in valuation (amusingly, this is 10 times the cost of the James Webb Space Telescope that the answer in question was about) - they were certainly right about having more reputation to lose (Bing's chatbot also made mistakes in the demo); even Meta is now playing catchup by announcing a generative AI team; and people are generally losing their minds imagining all the possibilities. There's nothing magical (or particularly patentable) about ChatGPT's capabilities, so competitors will soon have similar offerings (Amazon says it basically has the same thing, but no it doesn't want to show you), and people are starting to find the weird edges, but Microsoft's strategy - having OpenAI release to the public knowing that they had the option to own it if people got excited or distance themselves if it flopped - paid off big, so even though the stock has given up all the gains since Nov 30th, let them enjoy the moment: cue the DiCaprio Great Gatsby champagne gif, etc.

Microsoft ChatGPT event 2023 live updates (cnbc.com)

Google AI updates: Bard and new AI features in Search (blog.google)

Alphabet shares dive after Google AI chatbot Bard flubs answer in ad | Reuters

Microsoft’s Bing AI, like Google’s, also made dumb mistakes during first demo - The Verge

(1) Facebook

Microsoft Stock Falling as Bing AI Descends Into Madness (futurism.com)

Amazon CEO Says It Has Been Working on ChatGPT-Like Tech for Long Time (businessinsider.com)

People are working hard to break ChatGPT and Bing's custom version. The results are interesting, but in my mind it's more interesting to see how this opens up a new vector for hackers - AI prompt injection. People have tricked ChatGPT into being bad by threatening it, asking it to pretend it's actually a bad chatbot named DAN, or asking it to respond in a language with less training/reinforcement. More impressively, a persistent questioner got Bing's bot to spit out its programming (maybe): not surprising given the time/expense to train a new model, it's just ChatGPT plus extensive instructions fed in ahead of each conversation starting, explaining what kind of answers to give and not give. Microsoft, surprising nobody, has decided the solution is to add policies and controls to keep things more predictable and bland.

Devious Hack Unlocks Deranged Alter Ego of ChatGPT (futurism.com)

ChatGPT Will Gladly Spit Out Defamation, as Long as You Ask for It in a Foreign Language (futurism.com)

Bing AI Flies Into Unhinged Rage at Journalist (futurism.com)

Kevin Liu on Twitter: "The entire prompt of Microsoft Bing Chat?! (Hi, Sydney.) https://t.co/ZNywWV9MNB" / Twitter

The new Bing & Edge – Learning from our first week | Bing Search Blog

Microsoft “lobotomized” AI-powered Bing Chat, and its fans aren’t happy | Ars Technica

Lots of interesting thoughts in here, but I want to highlight two. First, unless you're playing with ChatGPT yourself (which I strongly recommend), what you're seeing about it online is giving you a skewed impression due to survivorship bias (you're just seeing the coolest 1%). Second, it's useful thinking about these bots as essentially an asynchronous Mechanical Turk, where the impression of an intelligent machine is a veneer covering the constant tuning/reinforcement efforts of workers in Kenya.

The AI Crowd is Mad (proofinprogress.com)

In some ways generative AI is the new crypto - it has created a cultural conversation that casts tech leaders as brave pioneers leading us into the future, and so it has captured a lot of the buzz, hype, and free money from the former HODL crowd. This, in turn, will attract a large number of con artists and grifters, so keep your eyes open and your hand on your wallet. Don't misunderstand me - I think the technology has more (and more valuable) use cases than blockchain, but not as many as you're going to hear in the next six months.

Jasper generative AI conference in San Francisco: What was it like? (cnbc.com)

In a decision that I see as pretty unstable, the US Copywrite Office has decided that it's not possible to copyright images generated via AI. There's an interesting argument to be made from the fact that the corpus these models are built on is largely copyrighted work, but the actual argument was that prompt engineering to get the results you want doesn't involve enough creative effort.

The US Copyright Office says you can’t copyright Midjourney AI-generated images - The Verge

Meta is releasing (to researchers) a LLM that's smaller than but comparable to ChatGPT, in order to give people something to study to understand the strengths and limitations of the technology. This is the opposite of Bing's desire to tightly control what's said: the model card is up front that it will frequently say terrible or nonsensical things (also why it's not generally available).

Introducing LLaMA: A foundational, 65-billion-parameter language model (facebook.com)

llama/MODEL_CARD.md at main · facebookresearch/llama · GitHub

I'm not going to post any of the "gotcha" content where ChatGPT gives an answer that proves somebody's political point, and then people endlessly argue about whether failure to replicate means it was a fluke or the exploit has been patched. However, here's a pretty rigorous field analysis on OpenAI's content filter, quantifying the differences in treatment between different demographic groups. This is good data to drive important discussions - should the model and content rules treat the same speech differently based on which group it's directed to? What's the right way to address the fact that some groups bear a disproportionate amount of online hate?

The unequal treatment of demographic groups by ChatGPT/OpenAI content moderation system (substack.com)

I was reading through the documentation that Microsoft keeps about its OpenAI service, and I found this page fascinating: it's great insight into what the developers think qualify as good use cases (getting started on writing or code, summarizing/searching documents), how they recommend using the system (examples if you have them, lots of preamble describing the kind of response you want), and what to watch out for (making stuff up, parroting opinions from the internet), etc.

Use cases for Azure OpenAI - Azure Cognitive Services | Microsoft Learn

One of the ChatGPT competitors on the horizon is Claude from Anthropic. This head-to-head comparison shows that despite trying some different approaches to avoid problematic behavior, it's pretty much the same (except it's funnier - my first query to ChatGPT in December was to make a joke about Elon Musk, and even after a couple of rounds of explanation I didn't really understand what it was getting at).

Meet Claude: Anthropic’s Rival to ChatGPT | Blog | Scale AI

For a point of reference on the state of generative AI for voice: it's pretty good! Watch these demos from ElevenLabs and see why audiobook narrators should start working on their plan B. Not surprisingly, the ability to generate custom voices has been immediately turned to nefarious purposes. The cool thing about generated audio and images is that it's way easier to include an undetectable tool/user signature than it is in text, given the number of bits per unit of meaning.

(6) AI Voice Conversion Demo | Eleven Labs - YouTube

audiostory.ai

Startup Shocked When 4Chan Immediately Abuses Its Voice-Cloning AI (futurism.com)

Doximity partnered with OpenAI to release Docs GPT, intended to allow providers to write the kind of boring forms that doctors have to send to insurance companies to get drugs or treatments approved. Oddly, it looks like an open interface to some general GPT language model (I think it's probably close to GPT-3 - it doesn't seem to have any persistent memory between prompts), so if you want to ask it to write the letter in an angry tone, or ask about the infield fly rule or generally play with a model without having an account, that appears to be an option until this comes out of beta.

Doximity rolls out beta version of ChatGPT tool for docs (fiercehealthcare.com)

Docs GPT (doximity.com)

Last month included panic in the education field - if ChatGPT can write convincing five-paragraph essays, what was the point of school? As the old saying goes, "life, uh, finds a way". Having students become intentional critics of AI output serves several purposes: they are forced to check every claim, they learn critical thinking in general and with regard to AI text in particular, and the teacher gets to look cool.

My friend is in university and taking a history class. The professor is using ChatGPT to write essays on the history topics and the students need to mark up its essays and point out where ChatGPT is wrong and correct it. : ChatGPT (reddit.com)

A judge in Colombia used ChatGPT in deciding a court case. In five years the use of this kind of technology to help write decisions will probably be pretty commonplace and harmless - these LLMs are great at getting words on a page in the right kind of style, and decisions are rarely trying to be not boring. However, I think the way it was used in this case is concerning: asking it legal questions and using the answers. Even though the answers were checked (these models are likely to make up citations, quote law from other countries, etc), the bias in what cases and laws are referenced means you have no way to know if you're getting a full answer, or even the latest precedent.

A Judge Just Used ChatGPT to Make a Court Decision (vice.com)

Image generation models can, in rare circumstances, memorize images (especially duplicate images in the training data) and then essentially just spit them back out in response to a prompt. There are ways to prevent this behavior, but it's another example of the risks in releasing something so unpredictable.

Eric Wallace on Twitter: "Models such as Stable Diffusion are trained on copyrighted, trademarked, private, and sensitive images. Yet, our new paper shows that diffusion models memorize images from their training data and emit them at generation time. Paper: https://t.co/LQuTtAskJ9 👇[1/9] https://t.co/ieVqkOnnoX" / Twitter

Wow, weird rabbit hole on this one. Imagine a world where any rapper can get another rapper to collaborate on a song via AI - basically a mass-market audio version of the Tupac hologram. In case you haven't guessed where this is going, surprise! We already live in this world (you might say that this other Slim Shady was just imitated). I found out that, in fact, you do have identity rights to your voice, so someone affected by this can sue for damages, thanks to the efforts of Bette Midler four decades ago. A witty synthesis of all this is left as an exercise for the reader.

David Guetta Faked Eminem’s Vocals Using AI for New Song (futurism.com)

Midler v. Ford Motor Co. - Wikipedia

To view or add a comment, sign in

Others also viewed

Explore content categories