THE AI HYPE: Why the Industry Risks a Major Backlash - And How Serious AI Developers Avoid "Guilt by Association"
Alright, this might not be as "optimistic" as my usual posts (e.g. https://datadisruption.ai). Some will probably call me conservative ("the grown-up in the room"), others will claim I'm exaggerating. Still others will feel targeted and want to attack. But I'm ready - been there, done that - someone has to do the job, so bring on the tomatoes. 😉
Claiming to be a new breed of sheriff in town - without guns
Am I really the only one seeing this? Because what I'm witnessing is an accelerating trend in the industry that genuinely worries me: a wave of enthusiasts - often with backgrounds in business, communication, or sales, and with no real grasp of data or programming (sometimes not even a sound understanding of analytics) - eagerly experimenting with GPT prompts and playing around with agents in (over)simplified LCNC platforms like Make and Zapier.
And sure, in theory, this should be a good thing. Enthusiasm is always welcome, and it's fine for the first moves to be a little wobbly - especially after years of long-winded discussions with the IT department, or expensive consulting projects around some small use case. Isn't it gorgeous that, with all these easy LLMs and LCNC tools, we can now get more advanced data models and even actual agency into our own hands in a super-easy way?
And yes, eventually it can and should turn out like that. But what I'm actually seeing now is something totally different: self-appointed experts flooding LinkedIn and other SoMe channels with their "AI solutions," getting hundreds or even thousands of likes from other, equally data-illiterate businesspeople. They then go on to monetize that validation, charging for webinars and training sessions about how "simple" it all is and what "magic" they're creating. There are two fundamental issues here:
1) Everything they do is disconnected from the company’s real systems (ERP, CRM, CDP, DXP, etc.)
There's no integration, no traceability, no real automation. To be clear: even though it's technically possible to build deeper integrations in certain LCNC tools via APIs and code blocks, in practice this almost never happens with this type of "AI expert." As soon as it gets more advanced than a simple connection, it's suddenly not "no code" anymore - and that's where most draw the line.
Above all, it’s vital to understand that “real automation” in AI means dynamic, adaptive, and often self-learning logic. This kind of system requires a robust data pipeline, traceability, and quality controls - and almost all of this is missing from the solutions currently trending on SoMe.
2) What they build is nothing more than simple, static IF/THEN/ELSE logic from the 20th century - and has nothing to do with real AI or even GenAI/ML.
There's a complete lack of self-learning, adaptivity, and real (artificial) intelligence. Many of these solutions are no more advanced than old Excel macros - at best, basic RPA - yet they're being sold as the next major technological revolution. Sure, you can use Zapier, for instance, to build flows that call the GPT API, but all the actual ML/AI processing then happens on OpenAI's side - the Zapier flow itself has zero intelligence and zero model control.
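To make that concrete, here is a minimal sketch - in Python, assuming the official openai SDK (v1+); the ticket fields and routing rules are invented for illustration - of what a typical LCNC "agent" flow actually boils down to:

```python
import os

from openai import OpenAI  # assumes the official openai SDK, v1+

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def handle_ticket(ticket: dict) -> str:
    # Static, 20th-century routing logic: no learning, no adaptation.
    if ticket["channel"] == "email" and "refund" in ticket["text"].lower():
        queue = "billing"
    else:
        queue = "general"

    # The only "AI" in the flow is one call to a hosted model. All actual
    # ML happens on the provider's side; this flow has zero model control.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Draft a polite reply to: {ticket['text']}"}],
    )
    return f"[{queue}] {response.choices[0].message.content}"
```

Strip away the drag-and-drop veneer and that is the whole "agent": one IF/THEN branch and one HTTP call.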
Many don't understand the difference between triggering a model ("calling an API to get a response") and building, fine-tuning, monitoring, and validating one. Prompt engineering and agent flows in today's SoMe hype never mean training your own models, managing a proper data pipeline, or establishing any kind of model governance - it's just a "layer of quick connections" on top of someone else's AI infrastructure.
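For contrast, here is a hedged sketch of the smallest possible slice of the other side of that divide: a model release gated on evaluation against a versioned, labeled set, with a traceable record of the decision. The names (golden_set, the 90% threshold, the registry file) are my illustrative assumptions, not a standard:

```python
import json
from datetime import datetime, timezone

def evaluate(model_fn, golden_set: list[dict]) -> float:
    """Score a model against a fixed, labeled evaluation set."""
    hits = sum(1 for ex in golden_set if model_fn(ex["input"]) == ex["expected"])
    return hits / len(golden_set)

def release(model_fn, version: str, golden_set: list[dict], threshold: float = 0.9) -> dict:
    """Only release a model version that clears the evaluation bar - and log it."""
    score = evaluate(model_fn, golden_set)
    record = {
        "version": version,
        "score": score,
        "evaluated_at": datetime.now(timezone.utc).isoformat(),
        "released": score >= threshold,
    }
    # Traceability: every release decision is logged and auditable.
    with open("model_registry.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    if not record["released"]:
        raise RuntimeError(f"Version {version} below threshold: {score:.2%}")
    return record
```

None of this is rocket science - but none of it exists in a prompt-and-webhook flow either.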
So how do you avoid getting infected by this?
This is a recurring pattern every time a new, cool EmTech is launched. The first enthusiastic but less knowledgeable charlatans (some knowingly, others blissfully unaware) make the most noise. When less EmTech-savvy leaders and organizations fall for the hype ("this looks not only cool, but simple!"), it almost always ends in a backlash:
“We tried AI - and it didn’t work.”
The result? The entire industry risks guilt by association. Even those actually building real AI suffer as the collective trust drops.
So, what do you do if you're not one of the data-illiterate? You make sure not to fall into the same trap. That requires taking a stand: either you dodge and hope to avoid the fallout, or you step up and fight to position yourself as one of those building things the right way. I've personally taken the latter route before - and even if it's high risk, it's also high return. But for that, you need at least five things.
1) Clarify the gap between hype and real AI. You have to show why low-code/no-code flows can't always deliver at depth. Of course you should have a toolbox and avoid overkill for things that don't need it. But real, useful AI almost always means system integration, traceable data, robustness, and real business value - not just prompt-hacks where you throw together a bit of IVA with agents, operators, webhooks, and APIs just because it's easy to do in Zapier, Make, or n8n. Real enterprise AI always means versioning, retraining, data quality, and explainability - all of which is largely missing in fast LCNC flows, and that's why selling this as an "AI solution" for anything beyond play and PoC is dangerous in the long run. A minimal example of the kind of data-quality gate such flows skip is sketched below.
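As flagged above, here is a minimal sketch of such a data-quality gate. Field names and rules are invented for illustration:

```python
import pandas as pd

def quality_gate(df: pd.DataFrame) -> pd.DataFrame:
    """Fail loudly before bad data reaches training or inference."""
    issues = []
    if df["customer_id"].isna().any():
        issues.append("missing customer_id")
    if (df["order_total"] < 0).any():
        issues.append("negative order_total")
    if df.duplicated(subset=["order_id"]).any():
        issues.append("duplicate order_id")
    if issues:
        raise ValueError("quality gate failed: " + "; ".join(issues))
    return df
```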
2) Learn from history - and speak up early. This isn't new. We saw it with IT and eCom at the end of the last millennium, and with blockchain and VR at the dawn of this one. The first hype wave is always filled with a certain share of fluff, usually amplified by financial investment and media buzz - and it's the true professionals who have to clean up afterwards. That makes it all the more important to mark the difference early on. And sometimes, someone even has to be the kid in the street who points out that the emperor has no clothes.
3) Protect the concept of "AI" from being diluted. We must protect trust in real AI - otherwise it'll become as tainted as "digital transformation" became ten years ago. We're on the verge of AI actually doing wonders in almost every business, but in most cases this will require real data science modeling and coding, e.g. in frameworks like LangChain, where you build pipelines, orchestration, and context handling, and chain together different models to create real value. That demands real coding skills (Python, JS, etc.) and connections to enterprise databases, vector indexes, and policy layers. And again, of course we should not over-engineer use cases that don't need it - sometimes "AI-light" actually is the right choice - but in most cases, real data science modeling and coding are required. Above all, the former should never pretend to be the latter, and we need to be on red alert when AI-light turns into AI-fake. Transparency, explainability, and governance need to be key.
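For the curious, a hedged sketch of what the skeleton of such a pipeline can look like with LangChain's chain composition (package layout as of recent LangChain releases - the library moves fast; retrieve() is a stand-in for a real vector-index lookup behind policy and audit layers, not a working retriever):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

def retrieve(question: str) -> str:
    # Placeholder: in a real pipeline this queries an enterprise vector
    # index with access control and audit logging, not a hardcoded string.
    return "Refund policy: refunds within 30 days, approved by finance."

prompt = ChatPromptTemplate.from_template(
    "Answer using ONLY this context:\n{context}\n\nQuestion: {question}"
)
chain = prompt | ChatOpenAI(model="gpt-4o-mini", temperature=0) | StrOutputParser()

question = "What is our refund window?"
answer = chain.invoke({"question": question, "context": retrieve(question)})
```

Even this toy version shows the difference in kind: you own the prompt, the context source, the model choice, and the seams where governance attaches.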
4) Build for real - and explain why it's hard. There must be deep integration, control, data pipelines, and real ML/AI models behind most solutions. For all of us working in real development outside the "PoC swamp," launching hundreds of workarounds without a coherent data strategy, architecture, and integration leads to spaghetti that takes years and millions - globally even billions - to fix. Real enterprise AI always relies on managing and maintaining data pipelines, model versioning, retraining, and secure integration with the organization's entire system map - things completely absent from today's fast SoMe hype.
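One concrete piece of that maintenance work is drift monitoring: comparing live data against a training-time baseline and triggering retraining when the world shifts under the model. A rough sketch - the PSI metric is standard, but the 0.2 threshold is a common rule of thumb and the function names are mine:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between baseline and live distributions."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

def check_drift(baseline_scores, live_scores, trigger_retraining) -> None:
    """Kick off retraining when live data drifts materially from baseline."""
    drift = psi(np.asarray(baseline_scores), np.asarray(live_scores))
    if drift > 0.2:  # material shift: the world changed, the model didn't
        trigger_retraining(reason=f"PSI drift {drift:.3f}")
```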
5) Take the discussion - pedagogically, but uncompromisingly. It's not enough to hope people will see the difference. We have to dare to take the debate, point out the risks (data leakage, GDPR, black-box traps), and highlight proven cases where fluff solutions have failed. It's worth reminding people that many quick "AI solutions" in LCNC tools or via external APIs overlook that sensitive data is being sent to third parties - often without any audit, traceability, or control. At the same time, we have to lift the industry, invest in competence development, and build alliances with those who have true skills. Only then will we get a healthy development where real AI experts, data scientists, and data engineers get the chance to show the value of sustainable AI - and avoid spending years cleaning up bad PoCs full of workarounds.
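To make the data-leakage point tangible, here is a minimal sketch of the guardrail that quick flows skip: redact obvious PII and write an audit record before anything leaves for a third-party API. The regex patterns are illustrative and nowhere near exhaustive - real GDPR compliance needs far more (DPAs, retention policies, a DPIA):

```python
import hashlib
import json
import re
from datetime import datetime, timezone

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\+?\d[\d\s-]{7,}\d\b")

def redact(text: str) -> str:
    """Mask the most obvious PII before the text leaves the building."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

def send_to_llm(text: str, call_api) -> str:
    safe = redact(text)
    audit = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "input_sha256": hashlib.sha256(text.encode()).hexdigest(),
        "redacted": safe != text,
        "destination": "third-party LLM API",
    }
    # Audit trail first, API call second - traceability before data leaves.
    with open("llm_audit.jsonl", "a") as f:
        f.write(json.dumps(audit) + "\n")
    return call_api(safe)
```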
Closing the circle
I started by asking if I was alone in seeing this, but the reality is that several international AI experts are warning about the same thing. From the business side, prominent leaders are calling out the tendency not to create something new and sustainable from the ground up, but instead to build naïve AI as a layer on top of old systems. Technical voices point out that AI-generated code can quickly become difficult to understand and maintain, especially in no-code or low-code platforms. Researchers show how the hype leads to disappointment when the real limitations of the technology become clear.
This is thus far from an isolated problem. And yet, few are speaking up locally and within their industries. Now is therefore the time for more people to draw the line between hype and real results - otherwise, we risk AI losing trust before it's had the chance to deliver for real. AI is nothing but amazing when done right, but it demands competence, honesty, and courage. And, why not, some documented data literacy 😉
AI certainly has its true challenges - even some dystopian scenarios if we don't handle it properly - but in the end there is no question: even if some happy, less data-savvy enthusiasts are overhyping themselves, AI itself is still underhyped. It has the potential to revolutionize not only every industry, but the whole world (for better or worse).
But this will only happen if we build it for real and dare to draw a line against solutions that are just hype and the emperor's new clothes. We have a responsibility to keep the AI concept clean and trust high - otherwise the backlash will come, and even the best will risk "guilt by association." Time to stand up for quality, transparency, and true innovation.
PS: For ongoing discussions about AI and its impact on stack, structure, security, strategy and society, join the world’s first real (>100 members) forum on AI-strategy here: https://www.linkedin.com/groups/10070347/
Rufus Lidman, Fil. Lic.
Lidman started his first company at 19 and has since founded or co-founded ten ventures. He now chairs Northern Europe's fastest-growing digital/data talent acquisition firm, recently awarded Gasell and "Recruitment Company of the Year" for its effective use of AI while preserving human connection. This momentum is now accelerating with the launch of the world's first product within AI-First Hiring™. Previously, Lidman drove a pioneering AI-powered EdTech company in Singapore, reinventing learning for millions in emerging markets. He has worked as a digital strategist across four continents for 100+ companies, including Samsung, IKEA, Mercedes, Electrolux, and PwC. As an entrepreneur, he's led ten ventures with 2–3 exits, won three Gasell awards, and launched apps with 15+ million downloads. He founded IAB, advised the WFA, is a tech influencer with 50,000 followers, a speaker with 300+ lectures, author of four books, and created the world's largest digital strategy learning app used by 200,000 people in 165 countries. Lidman holds dual degrees in business and data statistics from Uppsala University, with further PhD studies in data science.
Senior Advisor at Digitalenta
Thanks for sharing, Rufus.
Fractional CTO for VCs | AI Governance Architect (Ex-Salesforce/Deloitte) | APMA Founder
The market's flooded with AI-washed tools that look slick but deliver zero strategic lift. Real AI agents should drive integration, automation, and measurable outcomes, not just pretty dashboards.
I've been thinking about apps - how everyone felt they needed one just because it was an app. One app paid a junior dev's salary for a year. In the end, what really mattered was being a search hit and having an icon in the App Store. AI is a different story: it fundamentally changes how we interact. Google has already had to rethink search. It affects all programming languages, all platforms, society, education - and this is just the beginning. The hype right now is about what people can actually grasp right now. Businesspeople adopt first because they can, but tech, IT, and science need more time, since it's hard and current systems need to keep functioning. But when the real players start moving into AI, there will also be real value delivered.