Why Sam Altman Says ChatGPT Can Be "Bad and Dangerous"
AI is getting smarter. But the man behind it wants us to slow down and think.
It's not every day that the creator of a powerful tool turns around and says, "Be careful, this might be risky." But that's exactly what Sam Altman, CEO of OpenAI, did.
He said the way some people lean on ChatGPT is "bad and dangerous." Sounds serious, right? Let's look at what he meant.
People are trusting ChatGPT a little too much
During a Federal Reserve conference in Washington, D.C., in July 2025, Altman shared something that caught attention. He said many people, especially young users, are saying things like, “ChatGPT knows me. I’ll just do what it says.”
He didn’t like that. In fact, he said it felt completely wrong.
And he’s right to worry. If people start depending on AI for big decisions, without thinking twice, that's a problem. It’s like handing your steering wheel to a stranger and hoping for the best.
AI sounds confident but can still be wrong
This is where it gets tricky. ChatGPT delivers answers in a smooth, confident way. It sounds correct even when it’s not. In AI terms this is called hallucination: the model sometimes just makes things up.
Now imagine a software developer blindly copying code suggestions, or a business analyst dropping ChatGPT’s advice into a client report without checking. That could go very wrong. And once the mistake is out there, it’s hard to take back.
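To make that concrete, here is a minimal Python sketch of the habit that prevents this: treat AI-suggested code as untrusted until it passes checks you can verify by hand. The suggested_median function below is invented for illustration, not an actual ChatGPT answer.

# An AI-suggested function (illustrative, not a real ChatGPT answer).
# It reads confidently and works for odd-length lists, but it is wrong
# for even-length lists, where the true median averages the middle pair.
def suggested_median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

# Check it against answers you already know before trusting it.
checks = [
    ([1, 3, 2], 2),       # odd length: correct
    ([1, 2, 3, 4], 2.5),  # even length: true median is 2.5, this returns 3
]
for data, expected in checks:
    got = suggested_median(data)
    verdict = "OK" if got == expected else "WRONG"
    print(f"median of {data}: got {got}, expected {expected} -> {verdict}")

Two tiny checks catch what a confident-sounding answer hides: the function fails on the even-length case. That is the whole point of verifying before shipping.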
Altman gave an example: ChatGPT might beat a doctor on a test, but he still wouldn’t trust it with his own medical care unless a real doctor was involved. And that says a lot.
Be careful what you share with AI
Another big concern? Privacy. Many people share personal details with ChatGPT as if it were a close friend or therapist. But here’s the catch: those conversations aren’t legally protected the way they are with a doctor or lawyer. So if you’re talking about emotional issues or family matters, think twice.
Just because the chatbot replies kindly doesn’t mean your data is safe.
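If you do paste sensitive text into a chatbot, one cheap habit is to scrub obvious identifiers first. Here is a minimal Python sketch; the two regex patterns are illustrative and nowhere near complete, so treat this as a reminder to redact, not as real PII protection.

import re

# Illustrative patterns only; real PII scrubbing needs a dedicated tool.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 010-7788."))
# -> Reach me at [email removed] or [phone removed].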
What should we do then?
Don’t stop using ChatGPT. Just don’t treat it like your final decision-maker. Use it like a smart assistant. Let it help you think, not think for you.
Before you act on its advice, especially for serious stuff like money, health or work, double-check. Talk to a human. Talk to your team. Or at least trust your gut.
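As a sketch of what “assistant, not decision-maker” can look like in code, here is a minimal human-in-the-loop gate. It assumes the official openai Python package and an OPENAI_API_KEY environment variable; the model name and the sign-off prompt are illustrative choices, not anything OpenAI prescribes.

import os
from openai import OpenAI

# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def draft_answer(question: str) -> str:
    """Get a draft from the model; the caller still owns the decision."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

def human_approved(draft: str) -> bool:
    """Force an explicit human sign-off before the draft is used."""
    print("--- DRAFT (unverified, may contain hallucinations) ---")
    print(draft)
    return input("Use this answer? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    draft = draft_answer("Should I move my savings into index funds?")
    if human_approved(draft):
        print("Approved: now sanity-check it with someone who knows your situation.")
    else:
        print("Rejected: revise the question or ask an expert instead.")

The design choice is the point: the model can only produce a draft, and nothing happens until a person explicitly says yes.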
One last thought
ChatGPT is powerful, no doubt. But it’s not your doctor, teacher or boss. It’s a tool. Like a calculator or search engine. It can guide you, but it shouldn't replace your common sense.
Altman’s warning is simple. AI can help, but blind trust? That’s where danger begins.
Have you ever followed AI advice and later regretted it? Share your story in the comments. Let’s learn from each other.
#SamAltman #ChatGPT #AITrust #DigitalWellbeing #ResponsibleAI #TechOverdependence #OpenAI #ArtificialIntelligence