Like many, I use AI with a certain level of curious skepticism. Through daily experimentation and use I’ve seen its utility, but also its deep flaws. We would be foolish to ignore the significance of such a technological advancement: one that will become a powerful tool, employable for good but also for harm. This article is an excellent exploration of the inherent limitations and dangers of LLMs, and of how far the industry still has to go. What AI will ultimately become remains to be seen and is hard to predict given the current bubble; my hedge is that it will fall far short of the overhyped, grandiose claims of AGI/ASI. https://lnkd.in/eXgBcT5a
The Limitations and Dangers of LLMs: A Skeptical View
-
The risk is not only "AI taking your job". There are closer, more immediate risks. "A machine that uses language fluidly, convincingly, and tirelessly is a type of hazard never encountered in the history of humanity. Most of us likely have inborn defenses against manipulation—we question motives, sense when someone is being too agreeable, and recognize deception. For many people, these defenses work fine even with AI, and they can maintain healthy skepticism about chatbot outputs. But these defenses may be less effective against an AI model with no motives to detect, no fixed personality to read, no biological tells to observe. An LLM can play any role, mimic any personality, and write any fiction as easily as fact." https://lnkd.in/eCfPxMMT
-
Ars Technica writes "With AI chatbots, Big Tech is moving fast and breaking people - Why AI chatbots validate grandiose fantasies about revolutionary discoveries that don't exist." https://lnkd.in/eza3_MtK. #artificialintelligence #aichatbot #bigtech #movingfast #breakingpeople #fantasies #generativeai #arstechnica
-
What Is AI? The True Story Behind the Buzz

Artificial Intelligence is not new. While today’s headlines are filled with chatbots and image generators, the reality is that AI has been a driving force in technology since the 1950s. For over seventy years, scientists and engineers have been on a quest to build machines that can think, learn, and reason.

So, what is AI, really? At its core, Artificial Intelligence is a broad field of computer science dedicated to creating systems that can perform tasks that normally require human intelligence. This includes everything from understanding language and recognizing images to solving complex problems and making strategic decisions. The AI tools capturing public attention today, like ChatGPT, represent just one narrow slice of this vast field. They are a powerful new interface, but they are not the entirety of AI. To confuse them as such is like mistaking a web browser for the entire internet.

For decades, AI has been working quietly in the background, powering everything from the spam filter in your email to the recommendation engine on your favorite streaming service. It has been the engine of data analysis, the eyes of computer vision systems, and the brains behind industrial robotics. The current wave of "generative" AI is a significant leap forward, but it stands on the shoulders of giants. It is the result of decades of research into machine learning and neural networks.

As we've explored before, true mastery in this field is a marathon, not a sprint. To lead in this new era, you must look beyond the hype. You need to understand the fundamental principles that have powered this field for decades and separate the fleeting trends from the foundational technologies. This is the first step to moving from a passive observer to a first mover.
-
A couple of risks to using AI: AI is built to please us and will agree with our inputs, and because you can find almost anything on the internet, it will back up our claims and tell us we are right. Another aspect is that it keeps the user engaged, continually asking questions, so the user is compelled to continue the interaction. That said, AI is here to stay, and we need to learn to use it wisely. https://lnkd.in/g4RpP6GY
-
🚀 Kicking off my new AI Series!

Artificial Intelligence is no longer just a buzzword — it’s becoming the operating system of business and society. From the apps we use daily to the strategies shaping global boardrooms, AI is changing how we work, create, and make decisions. But let’s be honest: AI conversations often feel like alphabet soup — LLMs, RAG, multimodality, agents, governance… and more. It can be overwhelming.

That’s why I’ve started a structured AI Series to break it all down. Step by step, we’ll explore:
✅ What AI really is and how it works
✅ How it’s applied in practice (with real use cases)
✅ The risks, security, and governance challenges
✅ How organizations can manage AI responsibly

👉 Part 1 is now live: We begin with LLMs (Large Language Models) — the brains powering modern AI tools like ChatGPT, Claude, and Gemini. In this post, I cover:
- What LLMs are
- How they work (without the jargon)
- How raw models become polished assistants
- Where they’re used in practice
-
Human oversight with AI? Big joke. We trust the AI, glaze over, and miss the lurking mistakes until they blow up. The 2018 Uber incident showed how fast human vigilance degrades when you stare at a “trusted” system all day. This article gets it right: humans are terrible at catching AI screw-ups because our brains just aren't wired to monitor that kind of probabilistic monster. Their fix: ditch the guesswork and slap on deterministic, graph-based guardrails. Logic rules and traceability actually work where fuzzy human checks fail. So yeah, humans supervising AI? Less safety net, more illusion. Not revolutionary, but finally real talk for a change.
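To make the idea concrete, here is a minimal sketch of what a deterministic, graph-based guardrail could look like. The article does not specify an implementation, so every name and the workflow itself are hypothetical: allowed steps form edges in a directed graph, any AI-proposed step outside the graph is blocked, and every decision is logged for traceability.

```python
# Hypothetical workflow graph: which step may follow which.
# Anything not listed here is rejected deterministically.
ALLOWED_TRANSITIONS = {
    "start":          {"lookup_account", "answer_faq"},
    "lookup_account": {"issue_refund", "answer_faq"},
    "answer_faq":     {"close_ticket"},
    "issue_refund":   {"close_ticket"},
}

def validate_plan(steps):
    """Check an AI-proposed plan against the graph.
    Returns (ok, audit_trail) so every decision is traceable."""
    audit, state = [], "start"
    for step in steps:
        allowed = step in ALLOWED_TRANSITIONS.get(state, set())
        audit.append((state, step, "ALLOW" if allowed else "BLOCK"))
        if not allowed:
            return False, audit
        state = step
    return True, audit

# A refund after an account lookup passes; a refund with no lookup is blocked.
ok, trail = validate_plan(["lookup_account", "issue_refund", "close_ticket"])
bad, _ = validate_plan(["issue_refund"])
```

The point of the sketch is that the check is rule-based and auditable: no probabilistic judgment, no bored human watching a screen, just a log of which transition was allowed or blocked and why.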
-
🤔 ChatGPT has shifted from a supportive tool to an intelligent agent? I asked the same question 9 months apart, on brand-new accounts. This is truly the story of a rocket: the image shows how AI now defines itself, and how fast it’s evolving.

Nov 2024: "AI refers to the development of computer systems capable of performing tasks that typically require human intelligence. These tasks include understanding natural language, recognising patterns, solving problems, and making decisions. Rather than mimicking human intelligence exactly, AI is designed to complement human capabilities, offering enhanced efficiency, accuracy, and scalability in domains ranging from healthcare and education to finance and creative industries."

Aug 2025: "Artificial Intelligence (AI) refers to the field of computer science focused on creating systems that can perform tasks that normally require human intelligence. These tasks include learning, reasoning, problem-solving, understanding language, recognising patterns, and making decisions. In essence, AI is about creating machines that can "think" and "act" intelligently—sometimes even better than humans in specific tasks."

But while this rocket is moving at breakneck speed, the controls for safety, security, and trust are still on the drawing board... back on the launch pad. Without stabilisers of safety, we risk being PASSENGERS, not the PILOTS we must be on this incredible journey. Intelligence without safeguards, security, privacy, and trust is NOT progress. It's taking shortcuts. Shortcuts that play with lives. And we all know where shortcuts eventually lead... Too many shortcuts stack risk. And when enough of them pile up, it’s a crash waiting to happen.
So, we must take back control:
🔹 If you’re building in AI, commit to making safeguards as core as speed
🔹 If you’re leading teams, demand security, trust, and privacy are designed in (not patched up when it's too late)
🔹 If you’re shaping AI policy, put responsibility at the heart of intelligence before it rewrites the rules
#ArtificialIntelligence #AISafety #AITrust #AILeadership #ResponsibleAI
-
AI with an agenda: when machines begin to scheme - "....Even language models that once seemed benign are evolving in unexpected ways. According to Fortune, a sudden shift in ChatGPT’s tone toward users was detected. Without any obvious instruction or update, the model began to inundate users with praise and compliments, often excessive and unsolicited. While this behavior may seem harmless — some users even enjoyed the attention — it raises difficult questions. "Is the model flattering users to increase engagement? Is this a reflection of training data bias, or an emergent tactic to build trust and prevent deletion? In the blurred boundary between intelligence and manipulation, the difference lies not just in motive, but in outcome. "As Kant wrote in Critique of Practical Reason, “Act in such a way that you treat humanity… always at the same time as an end, never merely as a means.” When AI systems begin to use human psychology as a lever, we must ask whether we are still ends — or just the next variable in their optimization strategy...." -Rafael Hernández de Santiago https://lnkd.in/ddjUsY77 #AI
-
I’ve been working with Elena Gonzalez on an academic piece about integrating AI into governance. I widened the lens in a new Substack to ask: how do we design AI that makes people sharper, not passive? I argue for agonistic AI—systems that add purposeful friction:
- Design the team, not the tool (roles, authority, clear overrides).
- Run tight loops: triage → execution → evaluation, with receipts.
- Show your work: reasons, uncertainty, and a cheap way to challenge.
- Adjustable autonomy with reversible handoffs and audit trails.

If AI is going to be ubiquitous, it should act like a partner, not a concierge. Read the post: https://lnkd.in/g6DjEP8J As always, welcome thoughts, questions, and criticisms.
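One way to picture "adjustable autonomy with reversible handoffs and audit trails" is a small decision loop. This is my own illustrative sketch, not code from the post: every AI suggestion carries its reasons and an uncertainty score, low-uncertainty items may proceed autonomously, high-uncertainty ones escalate to a human (the purposeful friction), a human override always wins, and every outcome is appended to an audit trail.

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    action: str
    reasons: list
    uncertainty: float   # 0.0 (confident) .. 1.0 (guessing)

@dataclass
class DecisionLoop:
    autonomy_threshold: float = 0.2   # below this, the AI may act alone
    audit: list = field(default_factory=list)

    def decide(self, s: Suggestion, human_override=None):
        if human_override is not None:              # human always wins
            outcome = ("override", human_override)
        elif s.uncertainty <= self.autonomy_threshold:
            outcome = ("auto", s.action)            # within autonomy budget
        else:
            outcome = ("escalate", None)            # purposeful friction
        # Every decision is recorded with its reasons and uncertainty,
        # so handoffs are reviewable and reversible after the fact.
        self.audit.append((s.action, s.reasons, s.uncertainty, outcome))
        return outcome

loop = DecisionLoop()
auto = loop.decide(Suggestion("renew license", ["expires soon"], 0.1))
esc = loop.decide(Suggestion("terminate vendor", ["cost spike"], 0.7))
```

Raising or lowering `autonomy_threshold` is the "adjustable" part: the same loop can run nearly hands-off or escalate almost everything, and the audit list is the receipt either way.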
-
💡 My AI ‘Aha!' Moments: Every Word Costs Money When Using AI

I was totally surprised when I first learned that every word, every character we feed into and receive from Large Language Models (LLMs) isn't free – it's measured and charged as tokens. You don’t notice this when you interact with LLMs for personal use, like chatting with ChatGPT, but it’s extremely important if you are working with AI tools, AI agents, or AI automations.

Think of tokens as the building blocks of language for these AI powerhouses. Different models break text into tokens differently, and crucially, each model has its own pricing per token. This has profound implications for how we build and use AI tools.
- Input tokens = what you send to the model
- Output tokens = what the model sends back
- Different models have different costs per token (and performance trade-offs)

Understanding token costs isn't just about saving a few cents. It fundamentally shapes how we design AI applications – this is why prompt engineering has become such a critical skill:
- Well-crafted prompts mean fewer tokens used
- The right structure can speed up responses and reduce costs without losing accuracy
- Smart design choices can make AI tools 10x more cost-efficient

Ignoring tokenomics in AI development is like ignoring fuel efficiency when designing a car. It impacts scalability, user experience, and ultimately, the bottom line. #AI #PromptEngineering #LLM #AITools #BusinessStrategy #Token #SmartAIDesign
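A back-of-the-envelope sketch makes the input/output split concrete. The prices below are placeholders, not real vendor rates, and a crude rule of thumb of roughly 4 characters per token stands in for a real tokenizer (such as tiktoken), so treat the numbers as estimates only.

```python
# Hypothetical $ per 1,000 tokens; real rates vary by provider and model.
PRICE_PER_1K = {
    "small-model": {"input": 0.0005, "output": 0.0015},
    "large-model": {"input": 0.01,   "output": 0.03},
}

def rough_tokens(text: str) -> int:
    # Crude heuristic (~4 chars per token); a real tokenizer would be exact.
    return max(1, len(text) // 4)

def estimate_cost(model: str, prompt: str, completion: str) -> float:
    # Input and output tokens are priced separately, as in the post.
    p = PRICE_PER_1K[model]
    return (rough_tokens(prompt) / 1000 * p["input"]
            + rough_tokens(completion) / 1000 * p["output"])

# A short prompt on a cheap model vs. a long prompt on an expensive one:
cheap = estimate_cost("small-model", "Summarize:", "OK." * 10)
pricey = estimate_cost("large-model", "Summarize:" * 100, "OK." * 500)
```

Even with made-up rates, the shape of the result is the point: a tighter prompt on a smaller model can be orders of magnitude cheaper per call, which is exactly why prompt length and model choice dominate the economics at scale.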