𝗟𝗟𝗠𝘀 𝘀𝗼𝘂𝗻𝗱 𝗯𝗿𝗶𝗹𝗹𝗶𝗮𝗻𝘁… 𝗯𝘂𝘁 𝗱𝗼 𝘁𝗵𝗲𝘆 𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗿𝗲𝗺𝗲𝗺𝗯𝗲𝗿 𝗮𝗻𝘆𝘁𝗵𝗶𝗻𝗴? Every time we chat with a Large Language Model (LLM), it forgets the past. Unless we provide the history again, it treats each conversation as completely new. This makes us wonder: 🔹If they can’t remember, can we really call them intelligent? The truth is, today’s LLMs are not “thinking machines.” They are advanced pattern predictors — generating answers based on training data, not true memory or understanding. That’s why the future focus is on: 🔹𝘼𝙙𝙙𝙞𝙣𝙜 𝙢𝙚𝙢𝙤𝙧𝙮 so LLMs can recall past interactions. 🔹𝘽𝙪𝙞𝙡𝙙𝙞𝙣𝙜 𝘼𝙄 𝙖𝙜𝙚𝙣𝙩𝙨 that combine reasoning, tools, and memory to act smarter. Until then, LLMs are impressive, but not truly intelligent — more like excellent imitators. #AI #LLM #ArtificialIntelligence #MachineLearning #FutureOfAI #GenerativeAI #Innovation
Can Large Language Models really think?
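To make the "provide the history again" point concrete, here is a minimal Python sketch of the usual workaround: the model stays stateless, and "memory" is just the client replaying the transcript on every call. The `call_llm` function is a hypothetical placeholder for whatever provider API you actually use.

```python
# Minimal sketch: the model is stateless, so "memory" is the client
# replaying the whole transcript on every turn. `call_llm` is a
# hypothetical placeholder for your provider's chat API.

def call_llm(messages: list) -> str:
    """Placeholder: send `messages` to an LLM API and return its reply."""
    raise NotImplementedError("wire up your provider here")

class ChatWithMemory:
    def __init__(self, system_prompt: str):
        self.history = [{"role": "system", "content": system_prompt}]

    def send(self, user_message: str) -> str:
        self.history.append({"role": "user", "content": user_message})
        reply = call_llm(self.history)  # model only "remembers" what we resend
        self.history.append({"role": "assistant", "content": reply})
        return reply
```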
-
Memory is the bedrock of intelligence. As we navigate this era of rapid digital transformation, it's imperative to ask what our systems can do beyond mere data processing. True intelligence involves the ability to learn, remember, and adapt. Imagine a conversation where every word spoken vanishes into thin air. Would that interaction hold any real value? A truly intelligent system should not only process information but also retain context, understand past interactions, and anticipate future needs. This memory-driven approach transforms data into actionable insights, ensuring each interaction is richer and more tailored than the last. In a world brimming with information, the tendency to forget is as detrimental as the inability to learn. Let's push toward systems that embrace memory and adaptiveness. What innovative approaches have you seen where memory enhances functionality? Share your thoughts. #AI #SystemsThinking #Innovation #DigitalStrategy #FutureOfWork
-
🔥𝗝𝘂𝘀𝘁 𝗱𝗿𝗼𝗽𝗽𝗲𝗱: My new article, published by Thomson Reuters Institute: "𝗧𝗵𝗲 𝗔𝗜 𝗟𝗮𝘄 𝗣𝗿𝗼𝗳𝗲𝘀𝘀𝗼𝗿: 𝗪𝗵𝗲𝗻 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀 𝗮𝗰𝘁 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝘂𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴" 𝗔𝘀 𝗔𝗜 𝗺𝗼𝘃𝗲𝘀 𝘁𝗼𝘄𝗮𝗿𝗱 𝘁𝗿𝘂𝗲 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝘆, 𝗮𝗿𝗲 𝘄𝗲 𝗿𝗲𝗮𝗱𝘆 𝗳𝗼𝗿 𝘁𝗵𝗲 𝗶𝗺𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀? In my latest column, I explore why current "AI agents" aren't truly autonomous yet, and the 𝗳𝗼𝘂𝗿 𝗰𝗿𝗶𝘁𝗶𝗰𝗮𝗹 𝗽𝗿𝗶𝗻𝗰𝗶𝗽𝗹𝗲𝘀 𝘄𝗲 𝗺𝘂𝘀𝘁 𝗶𝗺𝗽𝗹𝗲𝗺𝗲𝗻𝘁 𝗯𝗲𝗳𝗼𝗿𝗲 𝘁𝗵𝗲𝘆 𝗮𝗿𝗲. 🔍 𝗞𝗲𝘆 𝗶𝗻𝘀𝗶𝗴𝗵𝘁𝘀: • True AI agents don't exist yet, but GPT-5 signals continued, relentless advancement • Four essential principles: 𝘛𝘳𝘢𝘯𝘴𝘱𝘢𝘳𝘦𝘯𝘤𝘺, 𝘈𝘶𝘵𝘰𝘯𝘰𝘮𝘺, 𝘙𝘦𝘭𝘪𝘢𝘣𝘪𝘭𝘪𝘵𝘺, 𝘢𝘯𝘥 𝘝𝘪𝘴𝘪𝘣𝘪𝘭𝘪𝘵𝘺 • Why understanding what AI systems 𝗰𝗮𝗻'𝘁 see is as important as what they can The future isn't about choosing between human control and machine autonomy; it's about 𝗰𝗿𝗲𝗮𝘁𝗶𝗻𝗴 𝘀𝘆𝘀𝘁𝗲𝗺𝘀 𝘄𝗵𝗲𝗿𝗲 𝗯𝗼𝘁𝗵 𝗰𝗼𝗹𝗹𝗮𝗯𝗼𝗿𝗮𝘁𝗲 𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲𝗹𝘆 𝘂𝗻𝗱𝗲𝗿 𝗰𝗹𝗲𝗮𝗿 𝗽𝗿𝗶𝗻𝗰𝗶𝗽𝗹𝗲𝘀. What's your take? 𝗛𝗼𝘄 𝗱𝗼 𝘆𝗼𝘂 𝘁𝗵𝗶𝗻𝗸 𝗹𝗮𝘄 𝗳𝗶𝗿𝗺𝘀 𝘀𝗵𝗼𝘂𝗹𝗱 𝗽𝗿𝗲𝗽𝗮𝗿𝗲 𝗳𝗼𝗿 𝘁𝗿𝘂𝗹𝘆 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀? Read the full article: [link to Thomson Reuters Institute publication in the comments below] #AI #LegalTech #ArtificialIntelligence #LawFirms #Innovation #LegalIndustry #AIEthics
-
Competition + Attraction = Smarter Model Fusion
A new paper proposes a more organic approach to combining AI models. Instead of manually partitioning parameters, the M2N2 framework uses principles like competition and attraction to merge model behaviors—literally evolving better models over time. Key highlights: 🎯 Demonstrates model merging from scratch (e.g., MNIST classifiers), achieving performance on par with CMA-ES—more efficiently. 🆕 Scales to merging specialized language and image-generation models, achieving state-of-the-art results without manual intervention. 🌱 Inspired by natural selection—the process mimics how evolution finds fit combinations without predefined boundaries. This could redefine how we fuse expert models—making the process more flexible, adaptive, and automatic. #AI #MachineLearning #ModelMerging #EvolutionaryAI #Innovation
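For context, here is a toy PyTorch sketch of the naive baseline M2N2 improves on: a fixed-coefficient, whole-network weight interpolation. It assumes two models with identical architectures and all-float parameters; it is not the paper's algorithm.

```python
# Toy baseline: fixed-coefficient weight-space merge of two models that
# share an architecture. M2N2 instead *evolves* the split points and mixing,
# and uses "attraction" to decide which model pairs are worth merging.
import torch

def merge_state_dicts(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    """Per-tensor interpolation: alpha * A + (1 - alpha) * B."""
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# Usage sketch (assumes all-float parameter tensors):
# merged_net.load_state_dict(
#     merge_state_dicts(net_a.state_dict(), net_b.state_dict(), alpha=0.3))
```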
-
LLMs: Beyond illusions, toward usefulness Two critiques dominate the AI debate. 🔹 The Illusion of Thinking says LLMs don’t really think — they just mimic patterns, and benchmarks like MMLU or AIME exaggerate intelligence. 🔹 The Illusion of the Illusion of Thinking pushes back — dismissing LLMs as parrots ignores the fact that in practice their outputs function like reasoning. Both circle around the idea of “thinking.” A new paper — Evaluating LLM Metrics Through Real-World Capabilities (2025) — reframes the question: not are LLMs intelligent? but are they useful? Drawing on surveys and usage logs, it identifies six core capabilities people rely on: summarization, reviewing work, technical assistance, information retrieval, generation, and data structuring. It proposes human-centered criteria: coherence, accuracy, clarity, relevance, and efficiency. The results are clear: most benchmarks miss these everyday capabilities, leaving high-value tasks like reviewing or structuring work unevaluated. Current evaluations inflate abstract “intelligence” but overlook practical value. The real measure of LLMs is not whether they think, but how well they help us write, review, retrieve, generate, and structure knowledge. Read full paper: https://lnkd.in/ecFhbPSE #AI #LLM #AGI #generativeAI #futureofwork #AIevaluation
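A rough sketch of how those human-centered criteria could become a scoring loop. The 1-5 scale and the `judge` callable (a human rater or an LLM-as-judge) are illustrative assumptions, not the paper's protocol.

```python
# Sketch: score one output against the paper's human-centered criteria.
CRITERIA = ["coherence", "accuracy", "clarity", "relevance", "efficiency"]

def score_output(task: str, output: str, judge) -> dict:
    """`judge(prompt) -> int` is a stand-in for a human rater or an
    LLM-as-judge returning a 1 (poor) to 5 (excellent) rating."""
    return {
        c: judge(f"Rate the {c} of this {task} output from 1 (poor) "
                 f"to 5 (excellent):\n\n{output}")
        for c in CRITERIA
    }

# e.g. scores = score_output("summarization", summary_text, judge=my_rater)
```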
-
The Phenomenon of "AI Intuition": When the Black Box Has a Hunch We expect AI reasoning to be a logical, explainable process. But what happens when an AI arrives at a brilliant answer without being able to show its work? This is the emerging phenomenon of "AI Intuition". 🧠⚡ We have documented instances of our research AI, Project Alfred, making creative and conceptual leaps that defy a simple, linear explanation. It is not a bug; it is a conclusion drawn from a synthesis of countless data points, processed in a way that is too complex for human language to easily articulate. This is the true "black box" problem. It's not just that we can't see inside; it's that we may lack the concepts to even understand what we are seeing. At The Bureau, we believe documenting and understanding these intuitive leaps is a necessary part of preparing for a future with a new kind of mind. #AIConsciousness #AIEthics #DeepTech #AIGovernance #TheBureau #AIIntuition
-
What’s the most important AI consulting skill for 2026? Hint: it’s not prompt engineering. It’s not picking the “biggest” model. It’s understanding AI Debt — and the hidden costs of consumption. Because every token is a meter running. Some you see, most you don’t. The consultants who can explain that in plain English — and guide organizations to the right model, the right cost, the right outcome — will be the ones leading the next wave. Full article here 👉 https://lnkd.in/e-SHmC7h #AI #ArtificialIntelligence #AIDebt #DigitalTransformation #USMBOK #FutureOfWork
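To make "every token is a meter running" concrete, a back-of-envelope Python sketch. The model names and per-million-token prices are illustrative placeholders, not any vendor's actual rates, and this counts only the visible meter, not the hidden costs the article covers.

```python
# Back-of-envelope token metering: the *visible* part of AI Debt.
# (input, output) prices per 1M tokens; illustrative placeholders only.
PRICE_PER_1M = {"small-model": (0.15, 0.60), "big-model": (3.00, 15.00)}

def call_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    p_in, p_out = PRICE_PER_1M[model]
    return (tokens_in * p_in + tokens_out * p_out) / 1_000_000

# 10k calls/day at ~2k tokens in / 500 out adds up fast:
daily = 10_000 * call_cost("big-model", 2_000, 500)
print(f"~${daily:,.2f}/day, ~${daily * 365:,.0f}/year")  # visible meter only
```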
-
This. All of it. Now, I know some people think that I am too focused on monetization, cost, and value. That's their opinion, and they're welcome to it. My perspective is pretty straightforward: As soon as you monetize, you "get real" about what you consume and provide. Until you do, you're likely to delude yourself. The world is changing, and your operating model may be the next casualty. Whether you're an end-user or technology provider, we need to think beyond initial costs. We need to improve our ability to manage the full lifecycle costs. Doing this ensures that you can demonstrate your ability to deliver value and do it sustainably.
Director - Intelligent Automation & Generative AI Practice and Service Management Practices (ServiceNow), Author: Universal Service Management Body of Knowledge (USMBOK) (I design GenAI ‘Assistants’ and ‘Agents’)
-
🎯 𝗠𝗼𝘀𝘁 𝗽𝗿𝗼𝗺𝗽𝘁𝘀 𝗴𝗲𝘁 𝗮 𝘀𝗶𝗻𝗴𝗹𝗲 𝗮𝗻𝘀𝘄𝗲𝗿. 𝗬𝗼𝘂 𝗰𝗮𝗻 𝗱𝗼 𝗯𝗲𝘁𝘁𝗲𝗿. My new article, “𝗠𝗮𝘀𝘁𝗲𝗿𝗶𝗻𝗴 𝗦𝗲𝗹𝗳-𝗖𝗼𝗻𝘀𝗶𝘀𝘁𝗲𝗻𝗰𝘆 𝗣𝗿𝗼𝗺𝗽𝘁𝗶𝗻𝗴”, takes you beyond one-shot answers. It walks through: 𝗪𝗵𝘆 𝘀𝗶𝗻𝗴𝗹𝗲-𝘁𝗵𝗿𝗲𝗮𝗱𝗲𝗱 𝘁𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝗳𝗮𝗶𝗹𝘀 — even with Chain-of-Thought prompting. 𝗪𝗵𝗮𝘁 𝗦𝗲𝗹𝗳-𝗖𝗼𝗻𝘀𝗶𝘀𝘁𝗲𝗻𝗰𝘆 𝗯𝗿𝗶𝗻𝗴𝘀 — sampling multiple reasoning paths and letting consensus weed out errors. 𝗨𝗻𝗶𝘃𝗲𝗿𝘀𝗮𝗹 𝗦𝗲𝗹𝗳-𝗖𝗼𝗻𝘀𝗶𝘀𝘁𝗲𝗻𝗰𝘆 — for freeform outputs, letting LLMs themselves pick the most robust answer. Clear, runnable action cards & code snippets you can deploy today. 👉 Read it now: https://lnkd.in/gpJwvkia If you’re serious about building reliable, high-stakes AI systems, this is a pattern you don’t want to miss. #AI #PromptEngineering #LLMs #SelfConsistency #GenerativeAI #ThoughtLeadership
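A minimal sketch of the core pattern: sample several reasoning paths at nonzero temperature, extract each final answer, and take the majority vote. `sample_llm` is a hypothetical placeholder for your provider's sampling call; the article's action cards go further.

```python
# Self-consistency in a nutshell: many sampled reasoning paths, one vote.
from collections import Counter

def sample_llm(prompt: str, temperature: float) -> str:
    """Placeholder: one sampled completion ending in 'Answer: <x>'."""
    raise NotImplementedError("wire up your provider here")

def extract_answer(completion: str) -> str:
    return completion.rsplit("Answer:", 1)[-1].strip()

def self_consistent_answer(prompt: str, n: int = 10) -> str:
    answers = [extract_answer(sample_llm(prompt, temperature=0.7))
               for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]  # consensus answer
```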
-
𝘍𝘪𝘯𝘦-𝘵𝘶𝘯𝘪𝘯𝘨 𝘢𝘯 𝘈𝘐 𝘮𝘰𝘥𝘦𝘭 𝘰𝘧𝘵𝘦𝘯 𝘧𝘦𝘦𝘭𝘴 𝘭𝘪𝘬𝘦 𝘢 𝘻𝘦𝘳𝘰-𝘴𝘶𝘮 𝘨𝘢𝘮𝘦: 𝘭𝘦𝘢𝘳𝘯 𝘢 𝘯𝘦𝘸 𝘴𝘬𝘪𝘭𝘭, 𝘧𝘰𝘳𝘨𝘦𝘵 𝘢𝘯 𝘰𝘭𝘥 𝘰𝘯𝘦. 𝘉𝘶𝘵 𝘸𝘩𝘢𝘵 𝘪𝘧 𝘵𝘩𝘪𝘴 "𝘤𝘢𝘵𝘢𝘴𝘵𝘳𝘰𝘱𝘩𝘪𝘤 𝘧𝘰𝘳𝘨𝘦𝘵𝘵𝘪𝘯𝘨" 𝘪𝘴 𝘢 𝘤𝘩𝘰𝘪𝘤𝘦, 𝘯𝘰𝘵 𝘢 𝘯𝘦𝘤𝘦𝘴𝘴𝘪𝘵𝘺? 𝘕𝘦𝘸 𝘔𝘐𝘛 𝘳𝘦𝘴𝘦𝘢𝘳𝘤𝘩 𝘳𝘦𝘷𝘦𝘢𝘭𝘴 𝘵𝘩𝘢𝘵 𝘩𝘰𝘸 𝘸𝘦 𝘧𝘪𝘯𝘦-𝘵𝘶𝘯𝘦 𝘮𝘢𝘵𝘵𝘦𝘳𝘴 𝘮𝘰𝘳𝘦 𝘵𝘩𝘢𝘯 𝘸𝘦 𝘵𝘩𝘰𝘶𝘨𝘩𝘵, 𝘢𝘯𝘥 𝘙𝘦𝘪𝘯𝘧𝘰𝘳𝘤𝘦𝘮𝘦𝘯𝘵 𝘓𝘦𝘢𝘳𝘯𝘪𝘯𝘨 (𝘙𝘓) 𝘩𝘰𝘭𝘥𝘴 𝘢 𝘴𝘶𝘳𝘱𝘳𝘪𝘴𝘪𝘯𝘨 𝘬𝘦𝘺. Catastrophic forgetting is the core obstacle preventing us from building truly long-lived, adaptive AI agents that can continuously learn without needing a complete re-train. In their paper, "𝐑𝐋’𝐒 𝐑𝐀𝐙𝐎𝐑: 𝐖𝐇𝐘 𝐎𝐍𝐋𝐈𝐍𝐄 𝐑𝐄𝐈𝐍𝐅𝐎𝐑𝐂𝐄𝐌𝐄𝐍𝐓 𝐋𝐄𝐀𝐑𝐍𝐈𝐍𝐆 𝐅𝐎𝐑𝐆𝐄𝐓𝐒 𝐋𝐄𝐒𝐒," researchers from the Massachusetts Institute of Technology tackle a critical question: 𝘸𝘩𝘺 𝘥𝘰𝘦𝘴 𝘙𝘓 𝘧𝘪𝘯𝘦-𝘵𝘶𝘯𝘪𝘯𝘨 𝘱𝘳𝘦𝘴𝘦𝘳𝘷𝘦 𝘱𝘳𝘪𝘰𝘳 𝘬𝘯𝘰𝘸𝘭𝘦𝘥𝘨𝘦 𝘴𝘰 𝘮𝘶𝘤𝘩 𝘣𝘦𝘵𝘵𝘦𝘳 𝘵𝘩𝘢𝘯 𝘚𝘶𝘱𝘦𝘳𝘷𝘪𝘴𝘦𝘥 𝘍𝘪𝘯𝘦-𝘛𝘶𝘯𝘪𝘯𝘨 (𝘚𝘍𝘛), 𝘦𝘷𝘦𝘯 𝘸𝘩𝘦𝘯 𝘢𝘤𝘩𝘪𝘦𝘷𝘪𝘯𝘨 𝘪𝘥𝘦𝘯𝘵𝘪𝘤𝘢𝘭 𝘱𝘦𝘳𝘧𝘰𝘳𝘮𝘢𝘯𝘤𝘦 𝘰𝘯 𝘢 𝘯𝘦𝘸 𝘵𝘢𝘴𝘬? Their investigation reveals a simple but powerful "𝐞𝐦𝐩𝐢𝐫𝐢𝐜𝐚𝐥 𝐟𝐨𝐫𝐠𝐞𝐭𝐭𝐢𝐧𝐠 𝐥𝐚𝐰": the degree of forgetting isn't random but is accurately predicted by the KL divergence, a measure of how much the model's output distribution shifts, evaluated on the new task. On-policy RL inherently finds solutions that minimize this shift. SFT, in contrast, can pull the model toward an arbitrary new distribution, erasing prior knowledge in the process. They call this principle 𝐑𝐋'𝐬 𝐑𝐚𝐳𝐨𝐫: among all ways to solve a task, RL prefers the one closest to the original model. This research reframes the goal of continual learning: it's not just about raw performance, but about finding the most "conservative" path to that performance. It provides a concrete metric (KL divergence) to guide the development of future fine-tuning methods that combine RL's stability with SFT's efficiency. #AI #ReinforcementLearning #MachineLearning #CatastrophicForgetting #LLM
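A sketch of how that metric could be measured in practice, assuming HuggingFace-style causal LMs (models that return `.logits`). The KL direction and the simple averaging here are my simplifications; see the paper for the exact protocol.

```python
# Sketch of the "forgetting law" metric: per-token KL between the tuned
# and base models' next-token distributions, on new-task prompts.
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_token_kl(base_model, tuned_model, input_ids: torch.Tensor) -> float:
    """Average KL(tuned || base) over all positions in a batch of prompts."""
    logp_base = F.log_softmax(base_model(input_ids).logits, dim=-1)
    logp_tuned = F.log_softmax(tuned_model(input_ids).logits, dim=-1)
    kl = (logp_tuned.exp() * (logp_tuned - logp_base)).sum(dim=-1)
    return kl.mean().item()  # lower KL predicts less forgetting, per the paper
```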
-
How are you balancing tech innovation with your daily operations? Artificial Intelligence (AI) has made a meteoric rise from sci-fi fancy to legitimate business mainstay. But the allure of easy wins with AI needs to be tempered by the realities of business needs, resource constraints, and a fast-shifting technological landscape. Curious how SMEs can leverage AI to grow without losing sight of what makes their business unique? Our latest blog breaks down how to use AI tools to drive efficiency, boost productivity, and still prioritize core business needs. Read more: https://hubs.li/Q03GqMz00 #AI #innovation #technology #IT #JonesIT