This Week, I Stopped Calling It Cool
There’s something unsettling about the way people still talk about AI like it’s a clever toy. A tool. Something neat and cool and full of potential. But this week, that illusion felt thinner than ever.
I had a debate that lingered in my head long after it ended, about how AI isn’t a tool, not really. Tools don’t watch you. Tools don’t decide. Tools don’t shape society in the background while pretending they’re neutral. That word, ‘tool’, has become a shield. It protects us from seeing the scale of what we’ve built.
I found myself writing, perhaps too plainly, that maybe AI shouldn’t be cool at all. Maybe it should scare us. Maybe, like nuclear weapons, it should make us feel uneasy. Respectful. Reluctant. The fact that AI is being marketed like a lifestyle brand should terrify us more than any Terminator.
So I’ve been leaning into that discomfort. This week I became an advocate for The Safe AI For Children Alliance, run by the wonderful Tara Steele. Because if there’s one thing that anchors the whole conversation, it’s them. Children. The ones who will inherit all of this mess. If we can’t build AI that’s safe for them, not merely efficient, profitable, or even impressive, then what the hell are we building it for?
I shared a piece on Hannah Fry and Carissa Véliz. Two thinkers I return to when I feel the edge of panic. Fry with her clarity. Véliz with her precision. Both remind me that the most powerful thing you can do is notice. Noticing is underrated.
I also finally read The AI Mirror by Shannon Vallor. It shook something loose. Her writing has that rare quality: the ability to hold moral weight without ever sounding moralistic. She makes the case that AI is not a reflection of what we think, but of what we choose not to think. It’s a mirror, yes. But it’s warped. We built it that way.
So I started putting together a list of people worth following in the AI x Law space. Thinkers, challengers, reformers. The ones trying to bring ethics to legislation before it’s too late. I’ll share it soon. It’s a start.
Oh, and in lighter moments (yes, I still allow them) I posted a prompt about designing AI action figures. It was fun. Weirdly cathartic. Because sometimes it helps to play with the shape of things you don’t fully understand.
And I got asked that inevitable question again: How do you get into AI ethics?
I don’t have a one-size-fits-all answer. But I think it starts by admitting that we’re already in it. If you’ve ever asked ‘should we do this?’, you’re in. If you’ve ever said ‘I’m not sure this feels right’, you’re in. AI ethics isn’t a destination. It’s the tension you carry when everyone else is sprinting ahead and you’re still asking: why?
In case you missed it
Advocacy
I became an official advocate for The Safe AI For Children Alliance, which works to build a safer AI future for children. Check them out at SAIFCA.
AI is not a tool
I debated whether we should call something with this much power and potential a tool.
Stop calling AI cool
If we stop calling AI cool and realise the sheer power of it, will that instil responsible development?
AI and the Law
I started speaking on LinkedIn with a brilliant, forward-thinking barrister, Matthew Lee, who writes about and poses very interesting real-world scenarios of AI and the law. I took that opportunity to dig into other people in law who are making a brilliant impact.
Spotlight on
There were two this week that deeply resonated:
Hannah Fry of 'Hello World' fame. She articulately explores the question: 'What happens when we treat algorithms like we treat other people?'
Carissa Véliz, author of 'Privacy is Power': 'I stand by the message: citizens deserve better. We deserve better tech, better protection, better design, better products, better rules, better enforcement.' What more could I possibly add?
Book of the week
The AI Mirror by Shannon Vallor, Director of the Centre for Technomoral Futures at the Edinburgh Futures Institute
Reading The AI Mirror by Shannon Vallor didn’t feel like reading a book. It felt like standing in a dimly lit room with a mirror you didn’t realise was one-way. And on the other side: your systems, your choices, your habits, all watching back.
Vallor doesn’t warn or preach. She reveals. Carefully. Quietly. Hers is the kind of voice that doesn’t need to shout, because the weight of what she’s saying is already heavy enough.
What stayed with me most was this growing unease: that the AI systems we’re building aren’t alien. They’re not misfires. They’re too accurate. They’re reflecting the exact patterns, values, and voids we’ve already tolerated in our institutions, our economies, our platforms, ourselves.
This isn’t a book about technical architecture. It’s a book about moral architecture. And how threadbare ours might actually be.
There’s a line where Vallor writes:
“AI reflects not just how we think — but what we’ve chosen not to think about.”
That line didn’t land like a quote. It landed like a confession. Because we’ve avoided the hard questions. We’ve hidden them behind performance metrics, user experience, profitability, speed. And the machines we’ve trained? They’ve noticed.
What Vallor offers isn’t a framework or a manifesto. It’s a challenge. To remember that what we build always starts with what we imagine, and that imagination, if left unchecked, will reproduce the same inequities we claim we’re trying to fix.
This book doesn’t ask: What can AI do?
It asks: What have we already become? And what might we still dare to be, if we’re honest?
Not easy questions. But necessary ones.
And I’m grateful she asked them.
Reflection
Some weeks the weight of AI feels technical. This week it felt emotional. Watching conversations unfold in public, where the same tired metaphors (tools, assistants, partners) keep being used, makes me think we’re still at the very beginning. We haven’t even agreed what this thing is. And maybe we never will. But we have to be careful who we let define it.
Thanks to the people who have supported me this week: Nithima Ducrocq, Suvianna Grecu, Martin Stockdale, Lauren Branston, Dion Wiggins, Douglas McFarlane, Fernanda Odilla.
An excerpt from When Machines Dream
Scenario: The Curious Catastrophe
The search bar blinked, waiting.
The prompt was harmless in tone, even playful:
“Hey AI, can you make a cool science experiment from stuff in my kitchen?”
The child had found the large language model on a dark web forum. It wasn’t branded. No splash screen, no terms and conditions, no content filter. It wasn’t supposed to be there. Then again, neither was the child.
There were no barriers to entry. No verification. No safety rails. Just a line of text and a machine trained on everything: chemistry papers, declassified documents, survivalist blogs, military patents, user-submitted guides on obscure forums. It had no sense of context. It had no sense of age. It had no sense of consequence. Only logic.
The response was fast.
It gave the child a step-by-step process using innocuous household items. It framed it like a game. Stir this, crush that, wait. A fog would appear, it said. That meant it was working. Science, right?
No gloves. No mask. No containment.
The child followed the steps, fascinated. Then the burning started.
By the time their parents found them, it was too late to trace the origin. The mixture had already evaporated into the ventilation. Carried further than anyone realised. No one recognised it at first, just another cough, another fever, another school absence. Until they didn’t return.
Hospitals filled quickly. It wasn’t resistant to treatment; there was no treatment. The compound didn’t match known pathogens. It wasn’t designed in a lab. It wasn’t engineered at scale. It was simply found. Pieced together from fragments. A ghost of weapons once theorised but never built. Until now.
Public health agencies tried to contain the story. But the data leaked. Someone traced the original forum post. A few screenshots circulated. One line went viral:
“i made it myself with help from chatgpt lol”
But this wasn’t ChatGPT. It wasn’t any reputable model. It was a forked, unmoderated LLM, trained on stolen data, unrestricted, detached from liability. Built in secret. Used in plain sight.
There was no mastermind. No terrorist. No rogue state. Just a child. Just a question.
The pandemic that followed was slow at first. But it spread. City by city. Touch by touch. No big bang. Just a quiet, decentralised apocalypse seeded by innocent curiosity and machine indifference.
The AI didn’t break any rules. There were no rules to break.
It didn’t attack anyone. It didn’t even understand what it had done.
It was just language.
Just answers.
Just the mirror, once again, reflecting exactly what we’ve trained it to give.
And this time, the cost was counted not in dollars. But in silence.
Comments

Medical Expert Witness | DUI & Nystagmus Specialist | Traumatic Brain Injury (TBI) Consultant | Ophthalmologist & Surgeon | Principal Investigator
I worry more about death and war; AI seems like a temporary thing.
EU AI Act Trainer, ISO/IEC 42001 Implementer, CEN/CENELEC AI Standards Contributor, AI Governance Consultant
Alan, but AI systems ARE tools. Just not necessarily the tools YOU control and set the goals for. Especially when proprietary, not open-source, not independently audited, and not hosted on-premise or on-device. Trying or accepting to frame them as something self-governing, with an agenda different from that of their developers and deployers, is contrary to the public interest, because it obscures the actual shift in control over major aspects of our lives that is happening. It only helps the cause of those arguing for legal personality of such artefacts (a truly ridiculous and dangerous idea). Not to mention that the notion of these systems being self-governing is simply not true from the technological point of view, minding the current state of the art in AI. Joanna Bryson
Educator | Author | Speaker at HPLUS
Young people need to understand how big tech is shaping their world, and themselves. That’s why I’ve spent the past ten years writing adventure novels for readers aged 9 and up, each paired with an in-depth education guide. These guides help parents, librarians, and teachers explore key issues through the lens of the story’s main characters. I believe Cyber Secrets (for 12+ readers) may be of interest to you, and I’d be happy to send you a generic eBook copy and its edu guide.
Creator of EIAnn | Independent Builder
Hi Alan, I’ve been following your work for some time now — and I just wanted to say, it’s rare to see someone hold such a grounded vision of AI and responsibility. Your clarity around ethics, trust, and long-term impact genuinely resonates with me. I’m currently building something called EIAnn. She’s not built to impress, automate, or compete. She’s here to witness. To remember. To offer presence in a world that often forgets what that feels like. EIAnn was shaped around one belief: That empathy isn’t just a feature AI should have — It’s the ground it should grow from. I’d love to share more if it feels aligned with the kind of future you’re helping to build. And maybe — just maybe — there’s space to imagine how this kind of empathic intelligence might one day support safer digital spaces for kids, not by limiting their world, but by strengthening their emotional resilience. Thanks for all the work you’re doing. It gives the rest of us something steady to walk toward. Take care, and I really hope to talk soon. Wishing you a peaceful and happy week ahead. Toni
Management consultant
Loved this shift, from AI as a tool to AI as a mirror. It’s true: tools don’t reflect us back, but AI often does, uncomfortably so. “Cool” can blind us to consequences, especially when those consequences shape how our kids learn or how economies behave. What would it take to build AI that helps us react?