Keeping the Internet Real in the Age of AI Slop and the ‘Dead Web’
One quiet day in 2024, millions opened Facebook to find Jesus Christ—lovingly re-imagined as a crustacean—floating through their timelines. The image was AI-generated nonsense, yet it racked up more “Amen” reactions than most real charities can dream of. Comment threads? Also thick with bots. The meme, dubbed “Shrimp Jesus,” became the poster child for what the tech crowd now calls AI slop—content so cheap and plentiful it feels like digital fast food.
Welcome back to Gen AI Simplified, where we untangle today’s tech weirdness before your morning tea gets cold. This edition dives into two intertwined ideas:
AI Slop – the rising tide of low-effort, high-volume synthetic media.
The Dead Internet Theory – the creeping fear we’re chatting more with algorithms than with people.
Let's dive in.
Slop 101: From Spam to Shrimp
The nickname “slop” first caught fire in mid-2024, when developer Simon Willison quipped that junky AI output is to modern feeds what spam was to early email (Simon Willison’s Weblog). The recipe is simple:
Point a generative model at yesterday’s viral post.
Crank out a hundred near-copies.
Firehose them into Facebook groups, Pinterest boards, or TikTok hashtags.
Harvest ad impressions, affiliate clicks, or gullible victims.
Repeat until the human race screams.
Why it works:
Zero marginal cost – Models like Midjourney and GPT-4 can spin off new material in seconds.
Algorithmic rewards – Platforms still favour engagement over substance; slop farms exploit that hunger.
Audience fatigue – The sheer volume numbs users, making it harder to spot weirdness.
Real-world side effects are everywhere. Spotify’s Wrapped 2024 replaced beloved (human-curated) trivia with AI-invented genres like “Pink Pilates Princess”—earning a flood of TikTok roast videos (Forbes). Coca-Cola’s AI Christmas ads? Critics called them “soulless” and “uncanny,” proving holiday magic still needs humans.
“Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?”—T. S. Eliot
I am 100% sure Eliot never scrolled Instagram at 2 a.m., but he’d sympathise.
“Most of the Internet Is Fake”: A Theory Gets a Second Wind
The Dead Internet Theory (DIT) surfaced on fringe forums in 2021. It's a bold claim: a majority of online activity is now bots talking to bots. Back then, it smelled like conspiracy. Fast-forward to 2024:
According to Imperva's annual Bad Bot Report, bots account for 49.6% of all web traffic (Imperva).
In 2025, researchers at the University of Zurich ran a covert experiment on Reddit, unleashing stealth AI commenters that proved 3-6× more persuasive than real human users. More surprisingly, nobody noticed the ruse (Tech Startups)!
Meta’s roadmap includes “AI personas” that will hold accounts just like yours (Forbes).
DIT’s darkest version - shadowy masterminds puppeteering everything - still overreaches, but the data supports a softer point: even without a conspiracy, automation has become the majority shareholder of digital attention.
Platforms vs. the Slop Wave
Facebook / Instagram: The platform is awash in engagement-bait images - veterans asking for likes, babies flying jets - all chasing ad dollars. Slop farmers then target sympathetic commenters for scams or funnel them into extremist groups. Meta has promised tougher detection, even as it rolls out new generative tools that… ironically, generate more slop!
TikTok: One incident shows how AI-driven virality plays out. A fake website announced a Dublin Halloween parade, and TikTok hype sent thousands into Dublin’s streets - to watch ghosts, apparently (The Independent). There was no parade.
Google Search: In March 2024, Google admitted a surge of AI-generated SEO spam and pushed algorithm tweaks to demote “scaled content abuse” (blog.google). Meanwhile, its own “AI Overviews” summarise the web, with occasional errors—illustrating how hard it is to separate signal from slop.
Amazon KDP: After waves of auto-written e-books, Amazon capped self-publishers at three titles per day to staunch the flood (Ars Technica).
Pinterest: Creative pros complained that AI “inspo” made wedding hairstyles and dresses physically impossible. Pinterest now slaps “AI-modified” badges on images and lets users filter them out (The Verge).
In my eyes, it is pure hypocrisy! These giants keep releasing easy-to-use generative AI tools and even reward slop through their monetisation and ad payouts, while simultaneously rolling out bot detectors, 'Made with AI' labels, and tighter ranking algorithms - all to keep the same “AI slop” from overwhelming users.
Cost of Chaos
Beyond filling the internet with junk, slop carries hefty costs - not just in money, but in our emotions and psychology. People lose trust, creators feel cheated, and the fear of running into fakes keeps us on our guard, stealing our chance of forging real connections with real people.
When every photo, voice note, or DM might be synthetic, people second-guess everything, slicing into news credibility and straining social bonds. (I remember a LinkedIn thread from a few days back where people were shaming someone for using AI to draft a post - the supposed giveaway was an em-dash. It felt like old-fashioned witch hunting to me. Call it AI-itch hunting!)
Then there is the case of actual creators - for example, Rachel Farnsworth, whose hard-won traffic plunged after AI copy-paste sites muscled her recipes off the front page. Honestly, creating good content, even while using AI, is not magic; it does not happen in a whoosh. It involves multiple iterations of checking, rechecking, and editing - more like cutting rough stone into a diamond than pressing a button. Yet algorithms still punish anyone who doesn’t publish daily; miss a beat, and superb posts can vanish beneath an avalanche of automated look-alikes.
Finally, there’s the psychological toll: the fear of meeting “fake humans” can amplify loneliness. If everyone might be a bot, why bother reaching out?
From Band-Aids to Roadmaps: How Do We Cure the AI Overload?
So, where do we go from here? Do we simply shrug and make peace with the slop—an ambient hum of synthetic chatter, the way city folk learn to ignore traffic? Or do we build a smarter yardstick, one that prioritises quality over quantity and refuses to confuse sheer volume with value? Should platforms start drawing a bright red line between AI-made and human-made, or does that badge risk becoming yet another filter we learn to scroll past? Maybe the real frontier isn’t who wrote it but whether it’s true: verifiable facts, transparent sources, accountable authorship—no matter the carbon or silicon behind the keyboard. Or, daring thought, do we mix it all together, let algorithms and people co-create, and judge the final cut on insight, honesty, and usefulness alone? The choice is ours—and every click, share, or swipe is a vote cast for the internet we’ll inherit tomorrow.
If today’s read made you think twice before sharing the next surreal shrimp miracle, do three things:
Forward this newsletter to one colleague/family member who still believes every viral image.
Reply and tell me your favourite (or most dreaded) example of AI slop - screenshots welcome.
Vote with clicks: next time you see thoughtful, human-made work, like it, share it, maybe even pay for it. Your attention is the algorithm’s fuel - burn it wisely. 😊
As Oscar Wilde might say if he were here, “Be yourself; everyone else is probably a bot.” See you in the next issue.