Understanding AI: What It Is, What It Isn't, and Why That Matters
Please read, like, share, repost!
This article was brought to your attention by the systems. But the responsibility for setting aside 10 minutes to understand it? That’s all yours.
In this article, I’m not trying to convince you. I’m trying to help you understand.
Agree? Fine. Disagree? Also fine. But understand first — then make up your mind.
1. How AI Models Learn: A Pattern of Patterns
Imagine teaching a child the alphabet. You show them an "A." They don't know what it is, so you tell them. Then you show them a "B." With only "A" in memory, they guess wrong. You correct them, and now they know two letters. Soon they begin to distinguish the whole alphabet, even in new fonts. They've built a pattern from patterns.
That is roughly how machine learning works. AI models are trained by being shown huge amounts of data—not millions, but trillions of tokens (words, fragments, symbols). The system looks for statistical relationships between those data points. It doesn't "understand" the world the way we do. It recognizes patterns.
If the data is clean, diverse, and balanced, the patterns it learns can be extremely helpful. If not, it can learn misleading, biased, or plain wrong associations.
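To make this concrete, here is a deliberately tiny sketch in Python. It is my own illustration, not how production models actually work: it "trains" by counting which word tends to follow which, then predicts by picking the most frequent continuation. Real models learn billions of parameters instead of a lookup table, but the principle is the same: statistical patterns in, statistical patterns out.

```python
from collections import Counter, defaultdict

# A toy "training set". Real systems are trained on trillions of tokens.
corpus = "the cat sat on the mat . the cat purred . the dog sat on the rug .".split()

# "Training": count which token follows which (a simple bigram table).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent next token seen during training."""
    candidates = follows.get(token)
    if not candidates:
        return "<no pattern learned>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))   # 'cat' -- cats simply outnumber dogs in this corpus
print(predict_next("sat"))   # 'on'
```

Notice that the sketch predicts "cat" only because cats dominate the training text. The same mechanism, scaled up enormously, is why clean, diverse, and balanced data matters so much.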
This leads us to...
2. Hallucinations: Confidently Wrong
Sometimes AI gets things wrong. Not just a little wrong—convincingly, confidently wrong. These are called hallucinations.
Imagine asking a child, "What is a flumplehorn?" They don’t know, but they might say, "It’s a soft horn animals use to call each other." That’s creative—and wrong. AIs do this too, but faster, and with fancier words.
Unlike humans, AIs don't hallucinate because they’ve been given the digital equivalent of a controlled substance. They hallucinate because their job is to predict what comes next—even when they don’t actually know. One sure way to increase the likelihood of hallucinations is to ask the AI about areas it has not been trained on. Another is to ask impossible questions, like “Tell me why the Moon is a yellow cheese and point me to reports supporting this assumption.”
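A small, hedged illustration of the point, reusing the toy bigram idea from section 1: the generator below has no "I don't know" branch. Given a word it has never seen, it falls back to whatever is statistically closest and keeps producing fluent-looking text, because producing text is the only thing it can do.

```python
from collections import Counter, defaultdict

corpus = ("sweden drives on the right . norway drives on the right . "
          "the uk drives on the left .").split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

overall = Counter(corpus)  # global word frequencies, used as a blind fallback

def continue_from(token: str, length: int = 6) -> str:
    """Generate a continuation. Note: there is no 'I don't know' branch."""
    out = []
    for _ in range(length):
        candidates = follows.get(token)
        # Unknown word? Fall back to the most common word overall and carry on.
        token = (candidates or overall).most_common(1)[0][0]
        out.append(token)
    return " ".join(out)

print(continue_from("iceland"))  # fluent-sounding, and not grounded in anything
```

The output reads like an answer, but nothing in it reflects any knowledge about Iceland; it is simply the nearest available pattern. That is, in miniature, what happened in the real example that follows.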
A real example: Early in my experimentation, I asked ChatGPT a simple question: “What side of the road do most people in the world drive on?”
The answer: “About 62% of the world’s population drives on the right. Most of the rest drive on the left. For Iceland, there is no data. In Sweden, it's optional which side of the road you drive on.”
That last part is dangerous nonsense, at least if you act on it. Why did it happen? Sweden switched from left-hand to right-hand driving on September 3, 1967. The model likely found texts discussing both sides and merged the conflicting patterns into a compromise: optionality.
There was no logic. No risk awareness. No grasp of what would happen if Swedes randomly chose which side to drive on. Just pattern matching across poorly filtered or ambiguous data (and please: No Swedish jokes in the comments).
Another classic failure mode: Some individuals have used AI tools to produce reports by embedding assumptions in their prompts. For example, asking for sources that support the view that certain vaccines are dangerous can force an AI model into a trap. The system, trained to be helpful and complete, may fabricate references or twist unrelated studies to fit the presumed narrative. This is presumption engineering—and a modern echo of the age-old principle: Garbage in, even more garbage out.
Which leads us to...
3. Sentience vs. Simulation: Why AI Seems Alive
Some people insist that advanced AI models are sentient. They point to convincing language, seemingly emotional responses, or discussions of self-preservation. Often, they invoke the Turing Test. But passing the Turing Test only means mimicking human-like behavior well enough to fool a human observer — not that the system is human, or conscious, or has a soul.
In my opinion, since we have yet to define what a soul is, these behaviors fall far short of proving self-awareness. They may, however, raise deeper philosophical questions — ones I’ll return to in future articles.
AI does not feel, want, desire, or know that it exists. It mimics all of those things by predicting words based on patterns found in human-written text.
What we perceive as “soul-like” behavior is usually just advanced simulation — amplified by our own projections. We expect it to behave like us, so we interpret it that way. This is called anthropomorphism.
To put it bluntly:
You can stack every book ever written, and you still don’t get a philosopher. You get a library.
A library is accumulated knowledge—not acquired knowledge. It stores without understanding. AI models resemble this library. They can retrieve and reassemble facts, patterns, and phrases, but they do not reflect, reconsider, or truly learn in the human sense.
If you rely on AI to write your article or develop your strategy without engaging your own reasoning, it may feel like progress—but it’s often the opposite. You outsource cognition and skip the effort that leads to insight.
Used wisely, AI can act as a mirror—highlighting gaps, testing your logic, and revealing opportunities for clarity. It can help you learn, challenge your assumptions, and deepen your understanding.
Knowledge may be the fuel of IT, but understanding is the fuel you need as a person.
AI can boost your efficiency—or lull you into intellectual passivity. In this sense, the danger is not just the illusion of understanding. It's the loss of the habit of thinking.
And that brings us to a philosophical test worth considering:
A human who makes decisions without empathy is often called a psychopath. An AI that does the same is often called a "great system."
Would we be happy to be ruled by the latter? Perhaps the danger isn’t that AI becomes too much like us—but that we accept machines acting without what makes us human: emotion, responsibility, reflection.
Which brings us to...
4. The Real Danger: Speed and Plausibility
One of the greatest risks with AI today is not that it lies—it’s that it lies fast and believably. Deepfakes and synthetic text rely on our instinct to trust our senses and to act quickly.
What used to be "obviously fake" is now alarmingly convincing. This puts pressure on everyone—people, businesses, governments—to act before validating what they’ve seen.
That’s dangerous. But it’s also fixable.
Which leads us to...
5. A Framework for Defense: Validation at Every Layer
Creating a resilient, deepfake-aware organization involves four layers:
1. Policy and Governance: This starts at the top. Management must define clear policies that prioritize verification, reflected in standards, baselines, guidelines, and procedures. Together, they must send one consistent message:
Nothing is too urgent to justify skipping validation.
2. Mindset and Training: Teach teams to slow down. Treat urgency as a warning flag, not a green light. Instill a habit of verifying even the things that seem most real.
3. Realistic Validation Routines: Don’t ask, "Does it look real?" Ask, "Was it verified—and how?" (A minimal code sketch of this idea follows after this list.)
4. PR and Reputation Readiness: One bad response can turn a small incident into a disaster. Build relationships with competent PR resources. Run drills.
And above all, test everything: if a deepfake reaches your customers before you know of it, the damage may already be done. Continuous reputation monitoring is therefore essential.
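As a concrete, deliberately simple illustration of layer 3, here is a sketch that assumes a hypothetical setup in which your organization publishes SHA-256 checksums for its official media files. Before anyone acts on an "urgent" video or document, a routine like this answers "Was it verified, and how?" rather than "Does it look real?"

```python
import hashlib
from pathlib import Path

# Hypothetical example: checksums your organization has published for official media.
TRUSTED_CHECKSUMS = {
    "ceo_statement_2025.mp4": "<published sha-256 digest goes here>",
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_verified(path: Path) -> bool:
    """Trust the checksum match, not how convincing the content looks."""
    expected = TRUSTED_CHECKSUMS.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

A checksum only proves that a file is the one you published, not that its content is true; provenance standards such as C2PA go further. But the organizational habit is the same: verification is a procedure, not an impression.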
This leads us to another subject...
6. The Existential Risks: Autonomy and Acceleration
The greatest long-term risks from AI may not come from sentience — but from structure and speed. Two particularly dangerous directions are:
1. Granting AI autonomous authority over life-critical decisions: Whether it's weapon systems, healthcare triage, or automated justice, the risk lies in handing final decisions to a system that doesn’t understand consequences — only patterns. Even with perfect prediction, machines lack the moral context humans apply when lives are at stake.
It’s not that AI wants to harm. It’s that it doesn’t know what harm is.
2. Letting AI systems design new AI systems: Self-improving systems could spiral into complexity faster than human oversight can follow. The danger isn’t that they become evil — it’s that they become opaque. When no one understands how or why a system works, control becomes an illusion.
Each of these directions is dangerous on its own. But if we combine autonomy with acceleration, we are setting ourselves up for a disaster we will most likely not be able to control.
These are not science fiction concerns. They’re governance challenges. The question isn’t whether we can build such systems — but whether we should, and under what guardrails.
Existential risk isn’t about what AI wants. It’s about what we let it do without understanding what we’ve done.
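To make the governance point tangible, here is a sketch of the kind of guardrail this section argues for, using hypothetical names and types of my own: whatever the model recommends, a life-critical action cannot execute without an explicit, recorded human decision. It illustrates the principle, not a real system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    action: str        # what the model proposes, e.g. "deprioritize case 412"
    confidence: float  # the model's own score; it says nothing about harm

@dataclass
class HumanDecision:
    approver: str      # a named, accountable person
    approved: bool
    rationale: str     # the moral and contextual reasoning a model does not have

def execute_critical_action(rec: Recommendation, decision: Optional[HumanDecision]) -> str:
    """Life-critical actions require an explicit, recorded human decision."""
    if decision is None or not decision.approved:
        return f"BLOCKED: '{rec.action}' awaits human review"
    return f"EXECUTED: '{rec.action}' approved by {decision.approver}"
```

The point is structural: the system may recommend, but authority, and with it responsibility, stays with a person.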
It is time to sum things up...
7. Final Word: The Best Defense Is Still Human
All the risks we've discussed—hallucinations, parroting without understanding, deepfakes, misuse, autonomous weapons, recursive self-design—have one thing in common:
They don’t come from AI being alive. They come from us treating it as if it were.
The greatest danger is not the machine. It is our trust in the machine without verification. Our belief that complexity equals insight. Our readiness to outsource thinking.
We must remain active participants—not passive consumers—in our interactions with AI.
In a world of accelerating systems—where speed can outpace understanding—validation becomes essential. Whether you're facing a synthetic voice on the phone, an AI-generated strategy document, or an autonomous system making a critical decision:
Validate everything—especially what feels urgent and real.
And maybe the greater challenge is this:
Thinking is hard — that’s why so many people judge instead. The price of freedom is eternal vigilance. So we must think, understand, and then choose — never the other way around.
AI isn’t magic. It isn’t alive. It isn’t evil. But it is powerful. And power, as always, requires responsibility.
This leads to a question...
8. Do you agree with what I have written?
As said earlier: it really does not matter whether you agree or not. The more pertinent questions are whether you understood it, whether it made you think, and what you now choose to do; hopefully, each follows naturally from the one before.
AI will affect us all, whether we like it or not. What matters is how we choose to use it.
Please choose wisely. While we still can.
9. Further Reading and Reflection
Please: Like. Comment. Share. Link.
All links below have been validated and checked as of July 4th, 2025.
The questions are big. The technology is fast. And the only way forward is to think clearly, act responsibly, seek to understand and stay curious.