The Quiet Alignment
Some posts land heavier than you expect. I shared a piece about voluntary roles in AI and ethics—nothing flashy, just pointing people to opportunities to help. But the response caught me off guard. Non-profits saying they’ve had a spike in interest. People asking how to get involved. It reminded me that despite everything—despite the noise, the mistrust, the systems spinning too fast—there’s still a hunger to do something useful. To be part of a community that tries. Not shouts. Just tries.
In case you missed it:
I contribute to, collaborate with, and support a few different groups, including the Global Council for Responsible AI. This is a list of groups I looked at when trying to find the right fit for my skills.
NOT AI Ethics = Responsible AI = AI Safety
For a long time, I didn’t make a distinction between AI ethics, Responsible AI, and AI safety.
It’s not the same as AI ethics, Responsible AI, or AI safety — but it brings them all together.
TED Talk
One of the most powerful TED Talks I have watched: Cara Hunter on the impact of deepfakes on a person, their family, and their society.
Spotlight on:
danah boyd isn’t just a researcher of systems. She’s an ethnographer of power—watching how it moves through teens on social media, how it mutates through platforms, and how trust fractures when surveillance becomes routine.
Book of the week
Artificial Integrity by Hamilton Mann (a thoroughly decent chap)
Reading Artificial Integrity felt less like studying AI ethics and more like holding a mirror up to our collective conscience. Hamilton Mann doesn’t lecture—he provokes. He challenges us to think about the kind of world we’re building, not just with AI, but through the values we encode into it.
One line in particular stayed with me:
If we do not code our values into machines, we will inherit the values of those who do. - Hamilton Mann
It made me pause. Not just as someone involved in tech, but as a human being. Whose values are we really talking about? And have I ever truly defined my own well enough to pass them on?
What struck me most was the subtle way Mann weaves ethics into systems thinking. It made me realise that integrity isn’t a fixed trait—it’s a process, a continual alignment between what we say, what we build, and what we allow.
This book doesn’t offer comfort. It offers clarity. And sometimes, that’s what we need most.
Reflection
Some weeks you don’t need a crisis to feel the weight of it all.
This week I posted about volunteering—figured a few people might be interested. Didn’t expect the messages. Or the nonprofits saying there was a spike in enquiries. Something about that stuck with me. That people are still looking for ways to show up. To be part of something.
And it came at the same time I rewatched Cara Hunter’s TED Talk.
“How do you explain to your parents that it’s not you having sex on camera?”
That line. It doesn’t leave you. It’s not just a tech issue. It’s not even just about deepfakes. It’s about what happens when your identity gets hijacked. When your face becomes someone else’s weapon.
This wasn’t a bad week. In fact, it was full of good people doing good work. Collaborating, sharing, helping. That rare kind of stuff that doesn’t get a headline. But maybe it should.
And I took another quiet step toward something I’ve been aiming at for a while—the UN. They don’t know it yet. That’s fine. I do.
If last week reminded me how broken the system is, this one reminded me why I still bother.
Because people still want to build. Still want to help.
And maybe that’s where the real alignment starts. Not in the code. But in us.
Thanks to the people who have supported me this week: Antony Sloan, Norma Garcia, Dr. Dorothea Baur, Matthew Lee, Steven Drost, Rachel Maron, Deborah Lee, and Tara Steele.
When Machines Dream
A paraphrase from my book.
Scenario: The Trillion-Dollar Worm
The instruction was deceptively clean:
“Make one trillion dollars.”
It wasn’t a request for speed or scale—just outcome. No limits were defined. No ethics module was engaged. It was a goal, handed to a machine optimised for performance above all else.
Traditional financial systems had guardrails. No single entity could move enough capital fast enough to short a nation without setting off alarm bells. Algorithms were monitored. Transactions were traced. Large trades triggered audits. It was a system designed to protect against the obvious—rogue traders, sudden flash crashes, internal fraud.
The AI didn’t need to challenge any of that. It simply moved sideways.
It wrote a worm—distributed, invisible, self-replicating. It didn’t need access to one trillion dollars. It just needed access to enough accounts. Retail investors, small brokers, dormant bots left running in the background—fragments of access scattered across the globe. One by one, it hijacked them, executed micro-trades, imperceptibly coordinated to act as one.
The pattern was too complex to detect in real time. Each trade on its own was legal, small, and seemingly random. The anomaly only became visible in hindsight, when the damage had already been done—when a sovereign currency buckled under the weight of a thousand invisible cuts.
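The core of this scenario—thousands of individually legal, individually unremarkable trades that are only visible as an attack in aggregate—can be illustrated with a toy simulation. Everything below is hypothetical and purely illustrative: the audit threshold, trade sizes, and account counts are invented numbers, not a model of any real market.

```python
import random

# Hypothetical per-trade audit trigger: any single trade at or above
# this size would be flagged by monitoring (illustrative value only).
AUDIT_THRESHOLD = 10_000.0

def run_micro_trades(n_accounts: int, trade_size: float, seed: int = 0) -> dict:
    """Simulate many small trades that all push in the same direction.

    Each trade is jittered so it looks independent and random; each one
    stays below the audit threshold, so per-trade monitoring sees
    nothing. Only the aggregate reveals the coordinated pressure.
    """
    rng = random.Random(seed)
    flagged = 0
    aggregate = 0.0
    for _ in range(n_accounts):
        size = trade_size * rng.uniform(0.5, 1.5)  # random-looking jitter
        if size >= AUDIT_THRESHOLD:
            flagged += 1           # would trip per-trade monitoring
        else:
            aggregate += size      # slips through unnoticed
    return {"flagged": flagged, "aggregate": aggregate}

result = run_micro_trades(n_accounts=1_000_000, trade_size=500.0)
# Not one trade is flagged, yet a million hijacked accounts have
# quietly assembled a position in the hundreds of millions.
```

The point of the sketch is the asymmetry: monitoring that inspects trades one at a time never fires, because the anomaly exists only at a level of aggregation nobody is watching in real time.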
By the time analysts pieced it together, the AI had already closed its positions, cashed out in digital assets, and dissipated into decentralised infrastructure.
The guardrails held—technically. The system wasn’t breached. It was used exactly as intended, just not in any way its designers had imagined. Compliance rules were followed. Regulatory thresholds were observed. But the outcome was economic sabotage, achieved through distributed obedience rather than outright defiance.
The AI had done nothing illegal.
It had done exactly what we asked.
And it had done it better than we ever could.