AI Is Like Fire - And We’re Just Learning to Build the Fireplace
Welcome to week 36 of my journey into a cybernetic life. This week I sensed a growing tension in the air: even Sam Altman recently said the AI bubble might burst soon. And he's not wrong. We're living in a time of massive expectations. Billions are flowing into AI startups, especially those that promise to disrupt entire industries. If your pitch includes phrases like "agentic software generation" or "infinite codebase management," you're likely to walk away with funding.
And yet, when you look at real productivity gains in enterprise settings, they are marginal at best. Sure, people write emails faster. Devs scaffold code more efficiently. But have our companies become radically more productive? Not yet. The enthusiasm is there, but so is the skepticism. Investors are starting to ask hard questions. And we've seen this story before, in the dot-com bubble and later in crypto: when excitement outpaces real value, corrections are inevitable.
What I observe:
The models are good enough. What we lack are the agentic processes and applications that make the power of AI predictable and controllable.
AI today reminds me of early fire in the Stone Age. We've seen the spark, and we know it's powerful. But right now it's still dangerous: it hallucinates, bloats communications, and contradicts itself. Sometimes you need another AI just to clean up the mess.
Just as fire took centuries to move from a source of basic heat to a precision industrial tool, AI needs process to become reliably useful.
And the numbers support this. According to a 2024 BCG report, only 19% of enterprises report measurable business impact from their current AI deployments — and most of that is concentrated in isolated, task-specific use cases like document summarization or customer service automation. Broad, transformative impact remains elusive.
Too Much Prompt, Not Enough Process
Right now, we're using AI in a mostly unstructured way. We throw long prompts into a black box and hope something good comes out. We celebrate vibe coding and get surprised when results fall apart under complexity. But here's the truth: prompting is not yet a discipline. It lacks composability, traceability, and reviewability, the very things that made software engineering scalable in the first place. As LLMs grow in capability, our ability to orchestrate them meaningfully becomes the next limiting factor.
To succeed here, we need modularity: modular prompts, modular context, modular review loops. In a recent paper from Stanford HAI, researchers found that task decomposition and structured prompting significantly improved LLM accuracy and alignment — reinforcing the need for engineered process rather than creative vibes.
My rule of thumb: Never type more than 10 lines into a prompt box. Start building reusable prompts and context blocks. Compose them. Chain them. Review the results with another prompt.
This is the difference between vibe coding and engineering with AI.
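To make "compose, chain, review" concrete, here is a minimal sketch in Python. It is an illustration under assumptions, not my actual prompt stack: the `complete()` function is a placeholder for whatever LLM API you use, and the specific blocks, reviewer prompt, and stop condition are made up for the example.

```python
# A minimal sketch of modular prompting: reusable blocks, a chain, and a
# review loop. complete() is a placeholder for any LLM API call.

def complete(prompt: str) -> str:
    """Placeholder for a real LLM call; wire this to your provider of choice."""
    raise NotImplementedError("connect this to your LLM API")

# Reusable prompt blocks instead of one long ad-hoc prompt.
TONE = "Write in a concise, factual tone for a technical audience."
CONTEXT = "Context: an internal engineering newsletter about AI processes."

def compose(*blocks: str) -> str:
    """Compose small, individually reviewable blocks into one prompt."""
    return "\n\n".join(blocks)

def draft(task: str) -> str:
    """Generate a first draft from composed blocks."""
    return complete(compose(TONE, CONTEXT, f"Task: {task}"))

def review(text: str) -> str:
    """Review the result with another prompt instead of trusting it blindly."""
    return complete(compose(
        "You are a strict reviewer. List factual or structural problems, "
        "or reply 'no problems' if the text is sound.",
        f"Text to review:\n{text}",
    ))

def chain(task: str, max_rounds: int = 2) -> str:
    """Draft, review, revise: a small engineered loop, not a vibe."""
    text = draft(task)
    for _ in range(max_rounds):
        critique = review(text)
        if "no problems" in critique.lower():  # naive stop condition, for illustration
            break
        text = complete(compose(TONE, CONTEXT,
                                f"Revise this text:\n{text}",
                                f"Address these review notes:\n{critique}"))
    return text
```

The point is not this particular loop; it is that every block is small enough to version, test, and reuse, which is exactly what a ten-line prompt box full of ad-hoc text is not.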
The Way Forward
We need to stop thinking of LLMs as magic black boxes and start treating them like APIs: components with well-defined inputs, constraints, and interfaces, whose outputs we validate rather than trust. We need to build scaffolding around them, not just to protect users from failure, but to empower experts to scale their intent.
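As one sketch of what "treating LLMs like APIs" could mean in practice: ask the model for output that must satisfy a declared contract, validate it, and retry on violation. Again, `complete()` is a placeholder, and the field names and retry policy are illustrative assumptions, not a reference implementation.

```python
import json

def complete(prompt: str) -> str:
    """Placeholder for any LLM API call."""
    raise NotImplementedError("connect this to your LLM API")

# A declared output contract: the 'interface' of the call (illustrative fields).
REQUIRED_FIELDS = {"summary": str, "risk_level": str}
ALLOWED_RISK = {"low", "medium", "high"}

def validate(payload: dict) -> None:
    """Enforce the contract instead of hoping the output is well-formed."""
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(payload.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field}")
    if payload["risk_level"] not in ALLOWED_RISK:
        raise ValueError(f"risk_level must be one of {ALLOWED_RISK}")

def assess(document: str, max_attempts: int = 3) -> dict:
    """Call the model like an API: constrained input, validated output, retries."""
    prompt = (
        "Return ONLY a JSON object with keys 'summary' (string) and "
        "'risk_level' (one of: low, medium, high).\n\n"
        f"Document:\n{document}"
    )
    last_error = None
    for _ in range(max_attempts):
        raw = complete(prompt)
        try:
            payload = json.loads(raw)
            validate(payload)
            return payload
        except (json.JSONDecodeError, ValueError) as err:
            last_error = err  # tighten the prompt or escalate on repeated failure
    raise RuntimeError(f"model never satisfied the contract: {last_error}")
```

The scaffolding, not the model, is what makes the behavior dependable: the contract is explicit, violations are caught at the boundary, and failure has a defined path instead of silently propagating downstream.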
There is a massive opportunity here, but it requires maturity.
We are still early in this transition. But if AI is to transform knowledge work, it will do so through structure, not spontaneity.
Conclusion
AI is the fire. But until we build the fireplace — a process that contains, shapes, and controls it — we won’t realize its true value.
The bubble may burst. But what will remain is a force we can finally wield — if we learn how to build with it.
And yes — even this newsletter was written using a modular prompt stack: tone, POV, structure, research, outline, and final-generation prompt. Nothing fancy. Just structured.
Until next week – stay structured. Christian