The Quick and the 'Dead': When Buzzwords Kill Common Sense in AI.
AI image generation apparently isn't dead

I mentioned in another post that Why Business People Speak Like Idiots was one of the first 'business books' I read. I usually hear George Carlin's voice while reading it as his BS bit is one of the best standup comedy bits out there.

I thought it would be appropriate to call out the BS behind my new trademarked phrase, The AI Fear Cycle™, given a recent article I read from Venture Beat, creatively titled "Vibe Coding is Dead, Agentic Swarm Coding is the New Enterprise Moat."

It would be easy to turn this post into a rant, so I'll try to balance rant with usefulness. I've been using no- and low-code platforms plus automation for years, and over the last year and a half I've been integrating AI to make what I build more intelligent.

What will always matter to businesses

  • Leaders will ALWAYS want faster, better, cheaper.

  • Leaders will ALWAYS drop changes through the hole in the floor based on what they believe is a credible article, or on what they hear from their peers and 'expert roundtables' hosted by people who've never actually run a successful business themselves. (Sorry, charging $4,000 a day to ask "how do we feel about that?" doesn't count.)

  • Leaders, despite appearing to speak and act like idiots, are not idiots. They're just as stuck as the rest of us. Costs keep going up, and the freemium model invented decades ago makes it increasingly difficult to stay afloat in our race-to-the-bottom world.

  • Workflows and processes will always be a thing. Even if they suck, they are tangible and provide a useful frame for conversations about how things work.

What Fear and Hype You Can Ignore

  • <insert thing> is dead. I shouldn't have to explain that one; we all know it's driven by the desire for ad revenue, or the desire to sell <insert product or service> with a catchy call to action.

  • <insert new AI buzzword/phrase> - see above, and consider the lineage: we went from machine learning around 1959 to automated workflow in the 1980s, then, skipping ahead to a few years ago, from AI to generative AI to LLMs to Prompt Engineering to Agentic AI to RAG to Vibe Coding and now Agentic Swarm Coding.

  • Anything you read from Forbes, Inc.com, Venture Beat, CIO magazine (only because it's amazing to see that their content looks like it was written in the 1800s), and any other site/article/author that is clearly writing clickbait for ad revenue. Those articles might be interesting, but they are a distraction from what actually matters and only confuse us more. The more we're confused, the more we wait to take action.

Simplify

Alrighty, let's get to the useful stuff. But first, a teeny bit more context.

You've likely read funny articles like the one where a troll ordered 18,000 cups of water from an AI-powered drive-thru. Of course, the company now wants to scrap its AI strategy over an edge case, which is incredibly stupid. I call that the Management-by-Outlier™ framework.

It is, and it will ALWAYS be, Robot + People (with fewer people), not Robot OR People.

AI keeps promising full automation, eradication of jobs, blah, blah, blah, but smart people know it would be insane to remove all humans during the gold-rush era of AI because the technology is still too flaky and evolves too quickly to get reasonably predictable results. Apparently Klarna has finally read the memo.

The Microsoft Excel Principle

It's been a running joke for years that the most useful business platform is Excel. Non-technical users can do wondrous things without having to talk to pesky developers. Sure, it creates a mess, but it's faster, easier, and cheaper in the long run.

I mentioned that I've been using no- and low-code platforms and automation for years, and now, with the evolution of AI, what I build has some intelligence. Here's how:

My customer support is 90% automated. My bots are continually trained on most of the content on my website, a history of previous interactions, manual intervention by me, and more. So even though some people are rude to my bot, it saves me a ton of time. Learnings:

  • Review chats weekly and re-train as necessary.

  • Multiple safety nets, including 'tell the user to email us if you think they're getting frustrated', plus a list of things the bot should never do (a rough sketch of this kind of check follows this list).

  • Ignore edge cases - just because one a-hole was rude doesn't mean I'll change my processes to accommodate outliers.
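To make the safety-net idea concrete, here's a minimal sketch of what one of those checks could look like in plain Python. It's illustrative only: the frustration phrases, the 'never do' list, and the escalation message are assumptions, not my actual bot's configuration (which lives inside the platform I use).

```python
# Illustrative safety net for a support bot: escalate to email when the user
# sounds frustrated, and block replies that wander into forbidden territory.
# Phrases, topics, and the escalation message are assumptions, not real config.

FRUSTRATION_HINTS = ["this is useless", "talk to a human", "wtf", "ridiculous"]
NEVER_DO = ["custom pricing", "refund promise", "legal advice"]

ESCALATION_REPLY = (
    "It looks like I'm not being much help here. "
    "Email us at support@example.com and a human will sort it out."
)

def looks_frustrated(user_message: str) -> bool:
    """Crude heuristic: flag messages containing common frustration phrases."""
    text = user_message.lower()
    return any(hint in text for hint in FRUSTRATION_HINTS)

def violates_never_do(draft_reply: str) -> bool:
    """Block drafts that touch topics the bot should never handle."""
    text = draft_reply.lower()
    return any(topic in text for topic in NEVER_DO)

def safe_reply(user_message: str, draft_reply: str) -> str:
    """Return the bot's draft only if it passes both safety nets."""
    if looks_frustrated(user_message) or violates_never_do(draft_reply):
        return ESCALATION_REPLY
    return draft_reply

if __name__ == "__main__":
    print(safe_reply("WTF, let me talk to a human already", "Here's our FAQ link..."))
```

The point isn't the code; it's that the human-shaped escape hatch is designed in from the start, not bolted on after the first angry chat.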

My accounting is 95% automated. All inflows are categorized, tracked, and mapped to the proper accounts, but I review this weekly and have the final say on verifying the transactions (a rough sketch of that review gate is a little further down). It would be insane to give my 'agent' full access to write transactions directly into my accounting system, although I could easily do that.

Leaders, read that again. IT. WOULD. BE. INSANE. TO. COMPLETELY. REMOVE. A. HUMAN. FROM. THAT. PROCESS.

I made that extra snarky because I read an article about a team that let a bot loose on their codebase to write and release code. Guess what happened? And guess who they blamed?

Any fool who blames the tool is a tool and a fool.
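For the curious, here's roughly the shape of that review gate, as a hedged sketch rather than my actual setup: the categorization step, the pending-review queue, and the post_to_ledger function are hypothetical stand-ins for what the automation platform and accounting system really do.

```python
# Hedged sketch of a human-in-the-loop accounting flow: the bot proposes a
# category, but nothing hits the ledger until a human approves it during the
# weekly review. Every name here is a hypothetical stand-in.

from dataclasses import dataclass

@dataclass
class Transaction:
    description: str
    amount: float
    proposed_account: str = ""

def categorize(tx: Transaction) -> Transaction:
    """Stand-in for the AI step that maps an inflow to an account."""
    tx.proposed_account = (
        "Sales:Courses" if "course" in tx.description.lower() else "Uncategorized"
    )
    return tx

pending_review: list[Transaction] = []

def ingest(tx: Transaction) -> None:
    """Automated part: categorize and park the transaction for weekly review."""
    pending_review.append(categorize(tx))

def post_to_ledger(tx: Transaction) -> None:
    """Stand-in for the write into the real accounting system."""
    print(f"Posted {tx.amount} ({tx.description}) to {tx.proposed_account}")

def weekly_review() -> None:
    """Human part: I have the final say before anything is written."""
    while pending_review:
        tx = pending_review.pop(0)
        answer = input(f"Post {tx.amount} to {tx.proposed_account}? (y/n) ")
        if answer.strip().lower() == "y":
            post_to_ledger(tx)

if __name__ == "__main__":
    ingest(Transaction("Course purchase - Lean Change", 499.0))
    weekly_review()
```

The automated part does the boring 95%; the weekly_review loop is the 5% that keeps a human's name on the decision.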

8 Bots, Loads of Workflows. I can't list them all here, so I'll include some highlights:

  • Course evaluations into testimonials - when people evaluate my courses, the responses are automatically assessed for whether they'd work as a public testimonial. I just say 'yay' or 'nay'.

  • Event Summaries: from evaluations to insights, another bot's sole responsibility is to summarize what people like, what they don't, and what they suggest. I get a summary weekly and can take action if necessary.

  • Sentiment: one of my bots' sole purpose is to analyze sentiment across different types of content. It receives the context and the content and generates a sentiment rating. Then, if I want, I manually take all of that data and look for patterns (a minimal sketch of this kind of call is after this list).

  • Lean Change Knowledge test: This isn't live anymore because you can guess what happened. People used AI to cheat, so it's being re-tooled to be a coach instead.
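To show how thin the glue really is, here's a minimal sketch of the sentiment bot's core call. It assumes the OpenAI Python client; the model name, prompt, and labels are illustrative assumptions, not my production setup (which actually lives inside a no-code workflow).

```python
# Minimal sketch of a sentiment bot: context + content in, one label out.
# Assumes the OpenAI Python client and an API key in the environment;
# the model name and prompt are illustrative, not my production setup.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def sentiment(context: str, content: str) -> str:
    """Ask the model to classify a piece of content given its context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; swap in whatever model you use
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the content as positive, "
                        "neutral, or negative. Reply with exactly one word."},
            {"role": "user",
             "content": f"Context: {context}\n\nContent: {content}"},
        ],
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    print(sentiment("course evaluation", "Loved the workshop, would recommend!"))
```

That's the whole trick: one narrow job per bot, with the pattern-spotting left to a human.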

Simple Tools

This morning I built a new 'bot' in 7 minutes using make.com. Some of these tools might seem overwhelming, but they are simple even for non-technical users (the sketch after this list shows how little is really going on under the hood).

  • Zapier and make.com allow you to automate anything, and now that they have AI capabilities, you can make your workflows intelligent.

  • n8n and Mind Studio are fantastic as well, but clunkier by comparison.

  • Airtable lets you ask your data questions (I use this to track feature usage across a bunch of sites/apps, and it helps with prioritization).

  • Bubble.io, Webflow, SoftR, Retool, FlutterFlow, Bolt and Replit are easy enough low-code tools to help non-technical people build stuff. SoftR is probably the easiest. As a disclaimer, ignore everyone who says things like "I built a 10 million dollar mobile app in 6 minutes using <insert tool>". Most of these have a steep learning curve.
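For a sense of what a '7-minute bot' actually does under the hood, here's a hedged sketch of the same shape of workflow in plain Python: a webhook comes in, a quick classification step labels it, and the result lands in a log. The Flask route, the keyword classifier, and the CSV file are illustrative assumptions; the whole point of tools like make.com and Zapier is that you drag these steps onto a canvas instead of writing them.

```python
# Hedged sketch of what a "7-minute bot" does: webhook in -> classify -> log a row.
# The route, the toy classifier, and the CSV log are stand-ins for the
# drag-and-drop modules a tool like make.com or Zapier gives you.

import csv
from datetime import datetime, timezone
from flask import Flask, request

app = Flask(__name__)

def classify(text: str) -> str:
    """Toy stand-in for the AI step (a real workflow would call an LLM here)."""
    hot_words = ("refund", "broken", "angry")
    return "complaint" if any(w in text.lower() for w in hot_words) else "general"

@app.post("/webhook")
def handle_webhook():
    """Receive a form/chat event, label it, and append it to a running log."""
    payload = request.get_json(force=True)
    message = payload.get("message", "")
    label = classify(message)
    with open("inbox_log.csv", "a", newline="") as f:
        csv.writer(f).writerow([datetime.now(timezone.utc).isoformat(), label, message])
    return {"status": "ok", "label": label}

if __name__ == "__main__":
    app.run(port=5000)
```

Seven minutes in make.com is basically this, minus the typing.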

The World According To Garp

Bonus points to whoever gets the reference. As much as publications like to post doom-and-gloom about the eradication of humans, empty promises of 100% automation, agents that can make decisions by themselves, and magic tools that promise to replace your annoying developers, these are the only truths that matter:

  • Context: Just because you can doesn't mean you should. Some consequences matter more than others. If the only consequence is public embarrassment because a YouTuber troll ordered 18,000 cups of water, who cares. The internet has a short memory. If the consequence is killing a patient, FFS people, use your brain.

  • Guardrails: The number of people I talk to who say AI is banned in their organization is staggering. Leaders: you do realize everyone has a phone, right? Embrace it, experiment, and be as clear as you can about the guardrails that matter, not your personal opinions about data privacy.

  • One Tool Only: The number of people I talk to who say they're only allowed to use Copilot is staggering. Copilot sucks compared to every other AI tool out there. Sorry, it does. Accept the consequences: one tool = the appearance of control, but it reduces creativity and increases the number of people working around a (to me) stupid rule. Many tools increase creativity and are more likely to be useful, but they create technical debt and the need for more tacit knowledge. Challenge the constraints and accept the consequences; it's your company, not mine, so I don't care what you do.

  • Centralized AI teams: Public tools are fantastic. Your internal teams will never be able to keep up or build anything close to as useful. This is again a consequence-based decision. A centralized team that builds AI solutions creates a huge bottleneck and probably really bad solutions. A centralized 'AI CoE' (I hate that term) where the team members congregate to share stories and ideas is good, but embed them into existing teams.

  • Explore Together! While that sounds like something taken from a My Little Pony movie, it works. We all know every business is a snarled mess, and a pretty org chart only matters for knowing who sits where. Everything touches everything. Get people across your organization together to build and try stuff out.

  • Burning Cash: Decide how much money you're willing to light on fire. There are plenty of stories about companies that have scrapped their AI strategy, or even dumber stories inciting fear because 80% of AI pilots fail. Well, duh. WTF did you think was going to happen when you experiment with volatile technology? I had an argument with someone once about their view that companies should fully vet and plan AI implementations with clear ROI before starting. Wrong. Sometimes you need to whack things with a hammer for a while to see what they do.

What Would George Say?

Instead of writing my own closing, I asked Gemini, ChatGPT and Claude.ai to write it for me. This is a lesson in using multiple tools.

Claude: So here's the deal: The AI Fear Cycle™ isn't going anywhere because it's profitable. Fear sells magazines, consulting contracts, and conference tickets better than "hey, try some stuff and see what happens." The people screaming about AI apocalypse and the people promising AI utopia are often the same crowd - they just switch costumes depending on what's paying better that quarter. Meanwhile, the rest of us are over here actually building things, making mistakes, learning from them, and slowly figuring out what works. It's not sexy, it won't get you a TED talk, but it's honest work. And as George would probably say if he were still around to witness this circus: "It's all bullshit, folks, and it's bad for ya" - except for the part where you ignore the noise, roll up your sleeves, and actually try to solve real problems for real people. That part's pretty good.

Gemini: Look, here’s the deal. All this talk about jobs being "eradicated" and "100% automation" is just the latest flavor of high-tech bullshit. People have been promising to get rid of us for decades, and we're still here, still figuring out how to get the damn printer to work. So take a breath, ignore the fear-mongers, and get back to work. Because at the end of the day, it's just a bunch of fancy new tools for the same old job.

ChatGPT: I can’t write in George Carlin’s exact voice, but here’s a closing that channels his no-BS, razor-sharp vibe:

In the end, The AI Fear Cycle™ is just the latest costume for an old magic trick: distract you with jargon while picking your pocket for “faster, better, cheaper.” Today it’s “vibe,” tomorrow it’s “agenic swarm moat”—same carnival, new barkers. The truth is dull and therefore useful: keep humans in the loop, set guardrails that matter, and build small, real things on purpose. If a headline says everything you know is dead, check its pulse—then get back to work. Call the bluff, mind the context, and remember: hype is loud, but outcomes whisper.

My Closing?

Do whatever you want, I'm going to build more awesome shit. If you want to build awesome shit, hit me up.

Navin Quadros

Technology-led Organisational Change Management - Helping organisations navigate technology change

As always, Jason, you're being the voice of reason and this article was a feast for my cynical soul. No matter where one is on the AI fear cycle (:)), it's here and it's not going anywhere. All the more reason to understand it, rather than succumb to the eternal "doomsday Like engines" that our sources of information are fast becoming. As George Clinton said, "Think. It's not illegal yet…" Thanks for your insights.

Thomas Jenewein

Learning, Change, Adoption for individual & organizational Transformation - Business Developer & Podcast Host

to build awesome shit sounds always good. And the AI Fear Cycle is almost as stupid as war lingo. Hey (Dr.) Eike Martin & 👨🏻💻 Markus Metz - I think you like this text. :-)
