UNSW AI Institute’s Post

UNSW AI Institute reposted this

Tom Williams

Technology journalist at Information Age 👨‍💻

New from me: When Australian AI expert Dr Kobi Leins (PhD, GAICD) declined a medical specialist’s request to use AI transcription software during her child’s upcoming appointment, the practice told her the practitioner required the technology be used and that Leins was "welcome to seek an alternative" provider. 👋

The specialist’s “high workload and limited time” meant they were now “unable to perform timely assessments without the use of AI transcribing tools”, the practice said in an email seen by Information Age.

“They gave me the choice of proceeding with the AI that they were insisting on, or to find another practitioner,” Leins said.

In Australia, healthcare providers can decide not to consult with a patient for any reason — except in emergency situations — provided they facilitate the patient’s ongoing care in some way.

The system used by the specialist was one whose privacy and security capabilities Leins had previously reviewed as part of her work in AI governance — and one she said she would not want her child’s data “anywhere near”. And unlike most medical devices, AI scribes remain largely unregulated, leaving it up to individual healthcare practices to decide which tools they want to use, and how.

University of Queensland associate professor Dr Saeed Akhlaghpour said regulating AI scribes in healthcare with “targeted, risk-based national oversight” would help prevent potential issues around privacy, security and liability.

Full story: https://lnkd.in/gCJKR5Ne

Vinod Bijlani

Building AI Factories | Sovereign AI Visionary | Board-Level Advisor | 25× Patents

12h

Thanks for sharing, Tom Williams - The real question: Are we optimizing healthcare for practitioners' efficiency or patients' safety?

Marie J.

Author 'Nadia' | Co-creator Nadia | Author & Inventor AI Digital Human Cardiac Coach | Global AI Leader | AFR Top 100 Influential Women | Innovative CIO of Year | CTA | US O-1 Visa (Extraordinary Ability) | Not Quiet

11h

Yep - this is happening, and bravo Kobi Leins (PhD, GAICD) for sharing your story. The actual “patient experience” with these AI scribes is shit - let alone the dubious claims of specialist time pressures. Here’s our experience - and like Kobi, we have decades of experience in AI and systems.

Dr: So [patient name] - what could I help you with today?
Patient: starts to explain, but has slurred speech due to a neurological condition.
Dr: … my apologies, could you repeat that? The AI scribe didn’t get that…
Patient: How should I speak?
Dr: Just speak slowly. [5 mins of a 15 min appointment has gone]
Patient: My neck hurts.
Dr (explaining for the AI scribe): “I am examining [patient] splenius capitis.”

There was a to and fro between the Dr and the AI scribe in order to get the transcription “correct”. The focus is generating a “correct” transcription. There were errors, we ran out of time, and overall, why bother? Nothing I have read or experienced actually situates these tools in relation to the patient experience - and that can be deadly.

Rafael Brown

CEO & Founder at Symbol Zero // Microsoft Regional Director

6h

What could possibly go wrong?

“Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said”, by Garance Burke and Hilke Schellmann, AP News, updated 11:22 AM PDT, October 26, 2024: https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14

David Heasley

Commercial lawyer | Business Lawyer, Defence Contracts, Intellectual Property | NDIS advocate.

4h

Thanks for sharing, Tom

Mark Smithers

Leading Digital Learning Innovation

9h

I really don't see the problem here. The client was informed of what would happen and they were given a choice to see an alternative provider. What's wrong with that? I, for one, am perfectly happy to have my interactions with my specialists summarised by ambient AI transcription, because I don't like sitting watching my specialist act as an expensive typist writing up my notes during the consultation when I would rather they be focused on me. That's what I'm paying for. And that's my choice too.

Kobi Leins (PhD, GAICD)

Trustworthy & experienced global AI Management and Governance expert (legal & tech), years of experience reviewing AI across academia & industry 💫 100 Brilliant Women in AI Ethics (2024) 💫 Opinions are my own.

13h

Thanks for sharing, Tom Williams. For those interested in solutions, ISO/IEC 42005 (AI Impact Assessments) has a proposed path for reviewing these systems regularly over a lifecycle, in accordance with ISO/IEC 42001 (AI Management System). This provides for best practice, not minimum legal obligations, which are the lower bar. IEEE also has a great standard specifically for AI procurement: https://ieeexplore.ieee.org/document/11011522.

There are many folks doing this work now: Chris Dolman, Dr S. Kate Conroy, Julian Vido, Aurélie Jacquet, Jade W. and many more...

Given the complexity and interoperability of these tools with other systems, my strong recommendation is that industry bodies require regular review by experts who understand the opportunities and risks - from the patient as well as the practitioner perspective - in a documented, reviewable and compliant way.

Note: just touching on the complexity of the use of these tools, beyond privacy, security and outcomes: https://www.theguardian.com/technology/2025/aug/11/ai-tools-used-by-english-councils-downplay-womens-health-issues-study-finds

Miriam Reynoldson

Digital learning specialist, digital sociologist, thoughtpunk

11h

You go Kobi Leins (PhD, GAICD) - and thank you Tom Williams for sharing and publicising this. This is an appalling sign of the situation we're in. Health and economic systems under such intense strain that presumably good-hearted practitioners believe they can only achieve their current workloads with the aid of new, unsafe and inaccurate technologies developed and deployed unethically. Bullies punch down because they are being abused themselves.

Thomas Spence-King

Cyber Security Manager / Senior Cyber Security Assessor at KPMG Australia

9h

Speech to text has been around for ages, and I suspect they are using one of a number of "compliant" AIs. Regardless of the compliance, the concern here is that the AI may lead the specialist down a path which closes them off to alternatives (this applies to all professions that use these tools).

Henry Fraser

Technology law and policy expert

8h

I've been noticing this theme in AI deployment. It runs into trouble when it proceeds on the assumption that a service or profession is reducible to discrete automatable tasks, without properly situating those tasks within the context of the relationships through which the service is delivered. Medical note-taking is not a discrete function separate from the (human) relationship of care and trust between a practitioner and patient. If it is treated that way, chances are the automation abrogates trust.
