Cannibalism in Code: AI said it could replace us. Now it’s begging us to stay.
By Jiten Pant, PhD | Founder & CEO, NOBiotics | Board Advisor, Green Scientist
Date: June 2, 2025
Artificial Intelligence promised us progress. Precision. Possibility.
But something quietly terrifying is happening beneath the surface — and no one wants to talk about it.
AI isn’t just learning.
It’s recycling.
It’s feeding on its own output.
It’s forgetting what it means to be trained by us.
This isn’t just a glitch in the system — it’s a warning sign. And we’re sleepwalking into it.
Model Collapse: When AI Becomes Its Own Worst Teacher
Imagine a student who only studies their own old notes.
No new books.
No new conversations.
Just pages of their own recycled thoughts.
That’s what’s happening with generative AI right now.
Instead of training on human-made content — books, research papers, journalism, art — today’s large language models are starting to train on AI-generated content.
And when AIs learn from AIs, something breaks. Accuracy drops. Bias grows. Safety fails.
This slow-motion degradation has a name: model collapse. And even the world's most powerful models, GPT-4o and Claude 3.5 among them, may already be showing early signs of it.
Even models connected to the internet, using advanced tools like RAG (Retrieval-Augmented Generation), aren’t immune.
Because guess what the internet is full of now?
More. AI. Slop.
The Feedback Loop of Decay
We’ve created a system where AI content floods the web. That content gets scraped and reused to train future models. Those models output even more of the same. And the cycle continues.
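That loop can be sketched in a toy simulation. This is a hypothetical illustration, not how any real model is trained: each "model" here just fits token frequencies from its predecessor's output and samples a new corpus from them. The key property it demonstrates is real, though: a token absent from one generation's output can never reappear in the next, so the tail of the distribution erodes for good.

```python
import random
from collections import Counter

random.seed(42)

# Toy "training corpus": 1,000 distinct tokens with a Zipf-like long tail,
# standing in for the mix of common and rare knowledge on the web.
vocab = [f"tok{i}" for i in range(1000)]
zipf_weights = [1 / (i + 1) for i in range(1000)]
corpus = random.choices(vocab, weights=zipf_weights, k=5000)

def train_and_generate(corpus, n_out=5000):
    """'Train' by fitting token frequencies, then 'generate' by sampling them."""
    counts = Counter(corpus)
    tokens = list(counts)
    weights = [counts[t] for t in tokens]
    # The next corpus is drawn only from tokens the previous one contained,
    # so tokens lost in one generation never come back.
    return random.choices(tokens, weights=weights, k=n_out)

history = [len(set(corpus))]
for generation in range(10):
    corpus = train_and_generate(corpus)  # each model trains on the last one's output
    history.append(len(set(corpus)))

print(history)  # distinct-token count only ever shrinks: the tail vanishes
```

Run it and the distinct-token count falls generation over generation. The rare stuff, the equivalent of niche expertise and original research, disappears first.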
It’s cannibalism in code.
A feedback loop that spirals downward — away from truth, away from safety, and away from anything authentically human.
The Human Paradox
Here’s the paradox: the only thing that can save AI… is us. Humans.
We need more original human writing. More art. More research. More of the real thing. But AI — the very thing we’re being asked to feed — is destroying the value of what we create.
Writers aren’t getting paid. Artists aren’t getting credited. Researchers are being outranked by machine-written SEO content.
Why keep creating if the machine is rewarded more than the maker?
If We Don’t Act, AI Will Forget Us
This is not science fiction. It’s not a distant threat. It’s a trajectory. And we’re on it.
Unless we:
Pay the writers. Credit the artists. Cite the researchers.
Keep original, human-made work flowing into the training pipeline.
Stop rewarding machine-generated slop over the people who make the real thing.
...we will end up with machines that know everything — except how to think clearly, reason safely, or reflect reality.
We Were Never Meant to Be Replaced
AI said it could replace us. Now it’s begging us to stay.
The future of AI isn’t about more compute or bigger datasets. It’s about remembering who built it — and what happens when we stop showing up.
Let’s not wait for collapse to wake us up.
We can still build something worth trusting. But only if we stay in the loop.
Only if we keep creating.
— If this resonated with you, share it. Someone needs to hear it before they build the next broken model.
About the Author
Jiten Pant, PhD, is the Founder and CEO of NOBiotics and Founder and Scientific Advisor at Green Scientist.
He is working at the intersection of biotechnology, regenerative medicine, and digital health to tackle some of the most urgent challenges in healthcare — from antimicrobial resistance to respiratory injury.
With over 17 years of translational research and executive experience across the U.S., UK, and India, Jiten bridges science and strategy. His work spans NIH-funded R&D, patent-backed innovations, and early-stage commercialization.
But beyond the labs and boardrooms, he’s a fierce advocate for preserving what makes innovation meaningful: human creativity, integrity, and purpose.
This article reflects a growing concern he shares with many — that in the rush to build smarter machines, we may be forgetting the intelligence that matters most.
We built AI to extend us. Not erase us. Now it’s time we make sure it remembers that.