Why do AI models sound the same?

Akshada D.

Aspiring Software Developer | Generative AI

Why Do All AI Models Sound the Same?

A few days ago, I ran a fun little experiment. I gave the exact same prompt to five different AI models: OpenAI's GPT, Anthropic's Claude, Meta's LLaMA 2, and a couple of others. And guess what? The answers came back… almost identical. Polite. Neutral. Helpful. But also strangely uniform. Like five different robots trained in the same classroom. (I've sketched the setup in code below.)

That made me wonder: 👉 Why do they all sound the same?

Is it because…
They're trained on similar web-scale data (which may already be full of older AI outputs)?
Instruction tuning follows the same patterns?
Or many open models are fine-tuned on GPT's own outputs?
Probably a mix of all three.

But then I came across a research paper called "The Platonic Representation Hypothesis." The idea is fascinating: as models scale up, they don't just copy each other; their internal representations start converging toward a shared model of reality. Kind of like how physics reduces complex phenomena to a few simple laws. So maybe this sameness isn't a bug… maybe it's a feature of scaling.

But here's the twist: when GPT-3.5 and GPT-4 launched, a lot of open models started training on GPT outputs instead of human text. Which means models are now learning from other models, not from us. And that creates a feedback loop:
👉 Same tone.
👉 Same flow.
👉 Same "safe" middle-ground response.
Instead of unlocking creativity, we're just recycling it.

So here's the big question I'll leave you with: are AI models sounding alike because they're discovering the shared structure of human knowledge, or are they just echoing each other in an endless loop? Are we moving closer to real understanding, or just better mimicry?

What do you think?

#AI #MachineLearning #SyntheticData #PromptEngineering #AIResearch #DeepLearning #GPT #Anthropic #LLaMA #OpenSourceAI
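
If you want to try the same experiment, here's a minimal sketch, assuming the official openai and anthropic Python SDKs with API keys in the environment. Only two providers are shown to keep it short, the model names are just examples, and the embedding check at the end (via sentence-transformers) is my own way of quantifying "almost identical", not part of the original test.

```python
# pip install openai anthropic sentence-transformers
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
import anthropic
from sentence_transformers import SentenceTransformer, util

PROMPT = "Explain why the sky is blue, in three sentences."

# Ask two hosted models the exact same question.
gpt_reply = OpenAI().chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": PROMPT}],
).choices[0].message.content

claude_reply = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-20240620",  # example model name
    max_tokens=300,
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

# Embed both replies and measure how close they are in meaning.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = embedder.encode([gpt_reply, claude_reply], convert_to_tensor=True)
print(f"Cosine similarity: {util.cos_sim(vectors[0], vectors[1]).item():.3f}")
```

A high similarity score across providers backs up the "strangely uniform" impression; raising the temperature or adding an unusual system prompt is a quick way to see how much of the sameness is default style versus shared substance.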

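And the feedback-loop worry has a classic toy illustration, often called "model collapse" in the literature. This is nothing like how LLMs are actually trained; it's just the mechanism in miniature: fit a simple model to some data, sample from the fit, refit on the samples, and watch diversity shrink generation after generation.

```python
# Toy "model collapse" loop: each generation trains only on the
# previous generation's outputs. On average the spread of the data
# decays a little every round, so diversity quietly drains away.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=20)  # gen 0: "human" data

for gen in range(1, 31):
    mu, sigma = data.mean(), data.std()    # "train" a model on the data
    data = rng.normal(mu, sigma, size=20)  # next gen learns from its samples
    if gen % 5 == 0:
        print(f"generation {gen:2d}: std = {sigma:.3f}")
```

The standard deviation tends to drift downward because each refit slightly underestimates the true spread, and those small losses compound; that's the "recycling" intuition in a single loop.
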
Animesh Dubey

Student at Ashoka Group of Schools


To be honest, most LLMs rely mainly on deterministic mathematical algorithms, which by their very nature make it practically impossible for true randomness, and in turn uniqueness, to arise on its own. True randomness is a matter of quantum mechanics, and the clean, sterile world of code doesn't really allow complexity of that magnitude to prosper. In a way, a computer doesn't take into account the position and velocity changes of an electron inside itself when it uses that electron to calculate something. (That's what I can discern from this, and it's mostly off the top of my head, so there may be holes to poke in it, but I'm willing to acknowledge those. The goal here is to figure out what's really going on, not who's more correct.)
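
There's a concrete version of this point: the "randomness" in LLM decoding comes from a pseudorandom number generator applied to the model's next-token probabilities, so with a fixed seed and temperature the same prompt reproduces the same output exactly. Here's a minimal sketch of seeded temperature sampling; the vocabulary and scores are made up for illustration.

```python
# Seeded temperature sampling: the "randomness" in LLM decoding is
# pseudorandom, so fixing the seed makes every run identical.
import numpy as np

vocab = ["helpful", "creative", "weird", "unsafe"]  # toy vocabulary
logits = np.array([3.0, 1.5, 0.5, -2.0])            # toy model scores

def sample(temperature: float, seed: int) -> str:
    rng = np.random.default_rng(seed)
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                            # softmax over the scores
    return rng.choice(vocab, p=probs)

# Same seed, same temperature: the exact same "random" pick, every time.
print([sample(temperature=0.7, seed=42) for _ in range(3)])
# Higher temperature flattens the distribution, so more variety appears.
print([sample(temperature=2.0, seed=i) for i in range(5)])
```

So it's true that nothing in the pipeline is truly random; but with different seeds the outputs do vary, just within the narrow stylistic band that the training data and tuning impose.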
