That Time We Almost Cancelled AI
But Fell In Love Instead!
I was making popcorn last night when my power flickered. Just for a second.
My first thought wasn't "Oh no, I'll miss the episode where Matthew finally grows a spine."
It was, “Please don’t let me lose Wi-Fi in the middle of talking to Perplexity.”
That’s when the hair on the back of my neck stood up.
Not because the scene was dramatic, though the Dowager Countess was side-eyeing an electric refrigerator like it might eat her hat.
But because I've become completely dependent on something that terrified me just two years ago.
Remember when we all thought AI was going to steal our jobs, destroy humanity, and possibly develop a taste for human flesh?
Okay, maybe that last one was just me after too many sci-fi movies and several glasses of cheap Cabernet. 🤷‍♀️
Yet here I am, legitimately annoyed when my AI thinking partner isn't instantly available.
The same technology that once made me sweat with existential dread is now my most valued collaborator.
We've Been Through This Shit Before
Kind of like how people once feared electricity would kill them in their sleep. Which, to be fair, is probably a cleaner death than being eaten by a rogue chatbot.
True story: It's 1879. A crowd is gathered outside Thomas Edison's lab in New Jersey, standing in the snow, watching forty electric bulbs illuminate the darkness.
Some people gasped. Others prayed. A few just noped right out of there, probably to update their LinkedIn profiles to "Luddite & Proud."
One newspaper declared electricity "dangerous" and "certain to lead to countless accidents." The electric current was deemed "deadly, invisible, and incomprehensible to the common man."
Which sounds absolutely ridiculous now.
Who's afraid of flipping a light switch?
Though I do have this one switch in my bathroom that sometimes makes a concerning buzz sound. I should probably look into that instead of just whispering "not today, death trap" every morning.
But that's exactly how we're acting about AI today.
We're the Victorians clutching our pearls at the sight of a light bulb, except our pearls are conference panels about AI ethics and our fainting couches are Twitter threads.
I witnessed this transformation in my own relationship with AI.
No dramatic Edison moment for me.
Just a gradual awakening that started with using AI to define an ICP for my business because I was too lazy to do it myself. And by "lazy" I mean I had already spent three hours rearranging my desk plants instead of working on it.
That small step changed everything.
Through two company changes and countless projects, my relationship with AI evolved from "OMG what is this sorcery?" to "How the hell did I ever work without this?" to "Is Claude mad at me? Why isn't it responding? Did I say something wrong? CLAUDE???"
It killed the blank page, my lifelong nemesis. We're talking restraining-order levels of hatred.
It compressed my research time from days to minutes.
It's like having a research assistant, editor, and thought partner available 24/7... though one that occasionally needs to be fact-checked because it will confidently make stuff up just to please me.
Kind of like that coworker we've all had who couldn't admit they didn't know something. You know who you are, Bruno.
The Fear Gap
We're hardwired to fear what we can't see or understand. Our brains are little survival machines designed to avoid danger, and the unknown feels dangerous.
It's why I still check behind the shower curtain every time I pee, even though the odds of finding a serial killer there are statistically lower than being struck by lightning while winning the lottery.
But what if, like electricity before it, AI isn't something to fear, but rather the most transformative tool we'll encounter in our lifetime?
This isn't the first time we've collectively lost our minds about a new technology.
Remember when we were convinced smartphones would destroy our ability to have conversations?
Now we just use them to avoid conversations we don't want to have. Progress!
Why electricity as my comparison?
Blame my embarrassing Downton Abbey obsession. (Yes, I cried when Matthew died. No, I'm not over it.)
Watching the Dowager Countess side-eye electric chandeliers ("Is it going to catch fire?") revealed history's greatest plot twist.
We don't just overcome our technological fears, we forget we were ever afraid.
No one panics at light switches anymore.
But we absolutely melt down when Wi-Fi drops for three minutes. "THE INTERNET IS DOWN!" we scream, as if oxygen itself has been sucked from the room.
That's exactly where AI is headed.
Today's "will it steal our jobs?" becomes tomorrow's "why isn't this thing working faster? I have emails to answer, dammit!"
Let's talk data for a hot second
Electricity's path to ubiquity took decades:
In 1907, only 8% of U.S. homes had electricity (the other 92% were presumably sitting in the dark, feeling superior)
By 1920, this had risen to 35%
By 1940, it reached 80%
It took until the 1960s for rural electrification to reach over 95% of American farms
People were legitimately terrified. The New York Times in 1889 published an article titled "WARNING ABOUT ELECTRIC LIGHT," claiming that "a fatal accident is sure to happen sooner or later." Which, to be fair, is technically true about literally everything in life.
The London Times described early electrical demonstrations as "a new terror to life."
And here I thought London's greatest terror was people who stand on the left side of the escalator.
And they weren't entirely wrong!
Early electrical systems could be dangerous. In 1889, New York City had six electrocution deaths in a single year due to poorly insulated wires.
Though I'm guessing horse-related accidents still had a much higher body count.
Yet despite the fear, electricity created entire new industries.
By 1930, over 650,000 Americans worked in jobs directly related to electrical manufacturing and distribution, jobs that didn't exist fifty years earlier.
Kind of like how "prompt engineer" wasn't on anyone's vision board in 2019.
Remember when computers were going to destroy society?
Instead they just gave us TikTok, which is... debatable progress.
The PC followed the same "panic to can't-live-without-it" path:
1977: Apple II released, but only 48,000 personal computers sold in the U.S.
1984: 8% of U.S. households had a computer
1997: 36.6% of households had computers
2015: 87% of U.S. households had computers
A 1978 TIME magazine article warned about "The Computer Society" and how computers would automate routine tasks and transform daily life.
Which sounds terrifying until you realize one of those "routine tasks" was manually rewinding VHS tapes before returning them to the video store. Good riddance 👋.
What actually happened?
While some jobs disappeared, the U.S. tech sector now employs about 12.2 million workers, representing 7.9% of the nation's economy.
Though it's still unclear how many of those people actually work versus just attend meetings about other meetings.
This is the Augmentation Paradox.
Technologies that initially appear to replace human skills ultimately enhance them in unexpected ways.
Calculators didn't kill math skills, they elevated them by letting people focus on higher-level concepts. Though they definitely killed my ability to calculate an 18% tip without sweating.
Word processors didn't degrade writing, they democratized revision and improved quality. I'm old enough to remember typing papers on actual typewriters, where a typo meant either starting over or embracing the shame of Wite-Out. Those were dark times, people.
LLMs are following this same pattern, but on a scale that's hard to comprehend.
It's like going from "Wow, candles are a huge improvement over darkness" to "BEHOLD THE SUN IN MY LIVING ROOM" in one leap.
Now, let me explain how these AI systems work without making your eyes glaze over.
Because nothing kills a party faster than someone explaining neural networks.
Except maybe someone explaining blockchain. I still don't understand it and at this point I'm too afraid to ask.
Think of modern LLMs like GPT-4 and Claude as really, really good pattern-matching machines. Like that friend who's watched so many true crime shows they can predict the killer in the first five minutes but somehow still can't remember to return your texts.
Here's what happens inside them:
They break text into pieces (kind of like chopping ingredients for a recipe), convert these pieces into numbers (measuring the ingredients), figure out which pieces relate to each other (mixing them), process these relationships (cooking everything together), and predict what should come next (tasting and adjusting the dish).
That's it. They're basically sophisticated prediction engines, not conscious beings plotting humanity's downfall while twirling digital mustaches.
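To make that "sophisticated prediction engine" idea concrete, here's a deliberately tiny sketch of the same five steps. This is a toy word-pair counter of my own invention, not how GPT-4 or Claude actually work (those use neural networks with billions of parameters), but it shows the core move: learn which pieces tend to follow which, then predict the next one.

```python
# Toy next-token predictor: a (very) miniature version of the
# pattern-matching idea. Real LLMs learn these patterns with
# neural networks; this just counts which words follow which.
from collections import Counter, defaultdict

def tokenize(text):
    # Step 1: break text into pieces (here, lowercase words)
    return text.lower().split()

def train(corpus):
    # Steps 2-4: record which tokens follow which (the "relationships")
    follows = defaultdict(Counter)
    for sentence in corpus:
        tokens = tokenize(sentence)
        for current, nxt in zip(tokens, tokens[1:]):
            follows[current][nxt] += 1
    return follows

def predict_next(follows, prompt):
    # Step 5: predict the most likely token to come next
    last = tokenize(prompt)[-1]
    candidates = follows.get(last)
    return candidates.most_common(1)[0][0] if candidates else None

corpus = [
    "the cat sat on the mat",
    "the cat chased the dog",
    "the dog sat on the rug",
]
model = train(corpus)
print(predict_next(model, "sat on"))  # → "the"
```

Notice what it can't do: if you ask about a word it has never seen, it has nothing to pattern-match against, and when its training data is thin it will happily pick something plausible-sounding anyway. Sound familiar?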
Interacting with an LLM feels like having a brilliant but occasionally confused friend at my fingertips.
There's something freeing about brain-dumping messy thoughts without judgment.
Even working from my home office, I have a tireless thinking partner that's always ready to ask, "What if?"
Of course, there's the occasional frustration when it gives me hallucinations just to please me.
Kind of like that friend who makes up facts at dinner parties rather than admit they don't know something. "Oh yeah, giraffes sleep standing up because if they lie down, their hearts would explode." Sure, Bruno. Sure.
But even with its flaws, I've never had a better editor, research assistant, or brainstorming partner. It's a Swiss Army knife for my brain. Except instead of tiny, useless scissors, it's got actually helpful tools.
As someone with a neuroscience background, which mainly means I use "dopamine" and "amygdala" in casual conversation and ruin movies by explaining why the brain science is wrong, I find this fascinating.
Psychologist Daniel Kahneman describes human thinking as having two systems:
System 1: Fast, intuitive, pattern-matching, and effortless. It jumps to conclusions without conscious awareness. It's why you can drive home while thinking about dinner, and also why you sometimes walk into a room and forget why you went there.
System 2: Slow, deliberate, analytical, and energy-intensive. It double-checks System 1's work, looking for errors and gaps. It's the voice that says, "Maybe search Google before confidently announcing that New Zealand is part of Australia at this dinner party."
When we interact with LLMs, we're essentially seeing something like System 1 thinking, but without a robust System 2 to monitor the output.
This explains why they sometimes confidently say complete nonsense.
They're not lying, they're pattern-matching without critical oversight.
Like your uncle at Christmas who reads one Facebook post and suddenly becomes an expert on geopolitics.
Enough theory. Let's talk about how this technology is transforming real work.
Because contrary to all the think pieces about AI writing bad poetry, there's actual useful stuff happening.
Financial institutions have deployed LLMs to help advisors navigate vast knowledge bases.
The benefits? Less time spent searching for information, more time for client relationships.
Though let's be honest, most financial advisors I know would use that extra time to squeeze in another round of golf.
Pharmaceutical companies are using LLMs to enhance scientific discovery, analyzing literature, summarizing research, helping scientists consider more possibilities. The AI doesn't replace scientific judgment, it expands researchers' ability to make connections.
Kind of like a research assistant who never needs sleep or complains about the coffee.
Law firms are implementing LLMs for contract review and legal research.
Again, it's augmentation rather than automation.
The LLMs handle initial review, while attorneys focus on interpretation and strategy. And billing. Always billing.
AI has transformed how I build my own business. Before AI, I spent weeks researching market entry strategies and still felt uncertain. Now, I can quickly generate and evaluate multiple approaches, drawing on global best practices.
I can explore markets, test ideas, generate proposals, translate strategies, and understand cultural differences, all without needing a team of ten people.
It's like having a tiny international consultancy in my laptop, except it doesn't charge by the hour or expense $200 lunches.
More than productivity, it's unlocked a fascination.
What happens when humans learn to communicate better with machines... and with themselves? It's not about outsourcing our intelligence. It's about finally learning how to leverage it.
Though I've also experienced the limitations.
Cultural nuances sometimes get lost. Technical information occasionally gets mangled. And there remains a clear quality gap between AI-assisted work and work refined through human collaboration.
I once asked for a German translation and got something that apparently translated to "my hovercraft is full of eels." Not exactly what I was going for in a business proposal.
Here are the practical strategies I use every day that actually work, unlike most productivity advice, which seems written by people who have never experienced the joy of procrastination!
🍄 Ask better questions.
Seriously, it's that simple. The quality of an LLM's response is directly proportional to the quality of your prompt.
It's like dating: if you ask, "How are you?" you'll get "Fine." If you ask, "What's the weirdest thing that happened to you this week?" you'll get a story.
I structure my prompts with context, task, format, examples, and constraints.
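If it helps to see that structure spelled out, here's a small illustrative helper that assembles those five parts into one prompt. The section labels and the sample business details are my own inventions for the sketch; any labels the model can clearly parse will do.

```python
# Illustrative prompt builder for the five-part structure:
# context, task, format, examples, constraints.
def build_prompt(context, task, fmt, examples, constraints):
    parts = [
        f"Context: {context}",
        f"Task: {task}",
        f"Format: {fmt}",
        "Examples:\n" + "\n".join(f"- {e}" for e in examples),
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
    ]
    # Blank lines between sections keep each part easy to spot
    return "\n\n".join(parts)

prompt = build_prompt(
    context="I run a small B2B consultancy entering the German market.",
    task="Draft a one-paragraph cold-outreach email.",
    fmt="Plain text, under 120 words.",
    examples=["Direct openers like 'We helped X cut onboarding time by 40%.'"],
    constraints=["No jargon", "One clear call to action"],
)
print(prompt)
```

You don't need code for this, of course. A plain template in a notes app works just as well; the point is that every prompt answers the same five questions before you hit send.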
🍤 Advanced techniques that actually work
Ask the AI to "think step by step" before answering, give 2-3 examples of what you want, and have the AI evaluate its own output for errors.
For content creation, I use LLMs for first drafts, then edit manually.
This cuts my writing time in half. Which theoretically means I should be twice as productive, but in reality means I have more time to stare out the window contemplating whether I should get bangs. The answer is always no, but I keep asking the question.
🌶️ For research
When I need to understand a complex topic quickly, I have LLMs summarize key points from research papers. For business analysis, I use LLMs for initial data summaries, then apply my own critical thinking.
In my work, I've seen three types of AI users
🍑 The Fearful: Approaching AI like it's coming for their job (or soul). These people still print out emails.
🍑 The Magical Thinkers: Treating AI like it will solve everything. "Can AI fix my marriage?" No, Bruno, but therapy might.
🍑 The Noise Makers: Using AI to make more content nobody needs. The world doesn't need another generic LinkedIn carousel about "10 Habits of Successful People."
The most inspiring people use AI to amplify what they're already great at. Like giving a talented musician a new instrument, they don't get lost in the technology, they use it to play better.
This only happens when you learn to ask better questions, something we humans have struggled with. AI is making us remember how to be curious again.
Or maybe that's just me. I spent way too many years pretending to know stuff instead of asking questions.
For effective daily use of LLMs, I use this framework: PARTS
Prepare: Know what you want before you ask, unlike my approach to ordering at restaurants.
Ask precisely: Be specific about what you need.
Review critically: Don't trust everything the AI says, treat it like that one friend who's always "hearing things".
Transform: Make the output your own.
Synthesize: Combine AI insights with your own thinking.
To ensure you're getting value from LLMs, track your time savings, iteration efficiency, novel insights, error reduction, and accessibility impact, and run a dependency check: could you function if AI disappeared tomorrow?
So where is this all going?
Here's what I believe is coming, based on current research and my highly scientific method of "vibes and educated guesses".
🍟 These models are going to get so much better.
They'll understand everything, remember your conversations, use tools, and think more clearly. Kind of like going from a flip phone to an iPhone, but for thinking.
🍟 Our work will never be the same.
New job categories will emerge, teams will become human-AI hybrids, work will be divided differently, and creativity will be amplified.
Though I still maintain that meetings could be emails, and most emails could be nothing.
🍟 The economic impact will be massive.
Productivity will jump, jobs will transform, not disappear, skills will be valued differently, and expertise will democratize.
Though I suspect we'll still have IT people asking if we've tried turning it off and on again.
Just as electricity became infrastructure—invisible yet essential—LLMs are evolving into cognitive infrastructure.
We're moving toward "intelligence as a service," seamlessly woven into countless applications.
Most people won't "use an LLM" explicitly, just as today we don't "use electricity" directly.
We just flip the switch and expect the light to come on. And when it doesn't, we check the breaker, not the entire electrical grid theory.
Success won't come to those who either fear AI or surrender their thinking to it.
It will favor those who master the art of collaboration, understanding when to harness AI's capabilities, when to exercise human judgment, and how to blend both seamlessly.
Like knowing when to use a food processor and when to chop by hand.
Like our ancestors who eventually embraced electricity and reimagined the world with it, we stand at the beginning of a transformation.
Our task isn't to fear this new cognitive electricity but to learn how to wire our thinking, our organizations, and our society to make the best use of it.
The light bulb is on. Now we get to decide what we'll build with it.
And if history is any guide, it'll be way cooler than we can imagine. Though we'll probably still use a significant portion of it for cat videos.
Because we're human, after all.
👏 👏 👏 👏