🚦 𝗘𝘀𝗰𝗮𝗽𝗶𝗻𝗴 𝗧𝘂𝗻𝗻𝗲𝗹 𝗩𝗶𝘀𝗶𝗼𝗻 𝗶𝗻 𝗟𝗟𝗠𝘀

Here's what researchers just discovered: when large language models reason step by step, they often fall into 𝘵𝘶𝘯𝘯𝘦𝘭 𝘷𝘪𝘴𝘪𝘰𝘯: the first few words push them down a single (sometimes wrong) path. Just making that chain longer doesn't magically fix it.

𝗧𝗵𝗲 𝘀𝗺𝗮𝗿𝘁𝗲𝗿 𝗺𝗼𝘃𝗲:
🔀 Run several shorter reasoning paths in parallel, or simply ask the same question multiple times, phrased differently.
🔀 Combine the results (majority voting or summarization).
🔀 Get better answers without retraining the model.

Think of it as "parallel brainstorming" for AI. Instead of betting everything on one long chain of thought, let multiple perspectives compete. Sometimes the best way forward isn't to think harder in one direction, but to think differently in many directions at once.

🤖 𝗪𝗵𝗮𝘁'𝘀 𝘆𝗼𝘂𝗿 𝗴𝗼-𝘁𝗼 𝘁𝗿𝗶𝗰𝗸 𝗳𝗼𝗿 𝗴𝗲𝘁𝘁𝗶𝗻𝗴 𝗺𝗼𝗿𝗲 𝗿𝗲𝗹𝗶𝗮𝗯𝗹𝗲 𝗮𝗻𝘀𝘄𝗲𝗿𝘀 𝗳𝗿𝗼𝗺 𝗟𝗟𝗠𝘀?

Source -> https://lnkd.in/ehH4yS7R

#AI #GenAI #LLMs #TechStrategy #Parallelism
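The parallel-paths idea above is essentially self-consistency sampling. A minimal sketch, where the hypothetical `ask_model` stub stands in for a real LLM API call sampled at temperature > 0 (so each call can follow a different reasoning path):

```python
from collections import Counter

def ask_model(question: str, seed: int) -> str:
    # Hypothetical stand-in for a temperature-sampled LLM call.
    # Here: two paths land on the right answer, one tunnels down a wrong path.
    sampled_answers = {0: "42", 1: "41", 2: "42"}
    return sampled_answers[seed % 3]

def self_consistency(question: str, n_paths: int = 3) -> str:
    # Run several independent (shorter) reasoning paths in parallel...
    answers = [ask_model(question, seed=i) for i in range(n_paths)]
    # ...then majority-vote instead of trusting one long chain.
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

print(self_consistency("What is 6 * 7?"))  # "42": the wrong path is outvoted
```

No retraining involved: the gain comes purely from sampling and aggregation at inference time.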
Mihail Mihaylov’s Post
You may not know how our brain thinks, biologically or technically. But you can know how an AI thinks, layer by layer. Think of it like a relay race:

First: words turn into numbers (the AI's language).
Next: attention kicks in: "Which word matters most here?"
Next: layers refine meaning, like zooming in on a photo until it's crystal clear.
Next: output: the AI guesses the next word, again and again, until full thoughts appear.

Billions of micro-decisions → one intelligent response.

LLMs don't understand like us humans. But they're an incredible mirror, reflecting back our patterns of thought.

Watch the video 🎥 to see this idea in action.

#AI #ArtificialIntelligence #MachineLearning #DeepLearning #LLM #TechExplained #AIForEveryone #FutureOfAI #GenAI
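The relay race above can be sketched as a loop. This toy uses a hypothetical bigram lookup table in place of the embeddings, attention, and stacked layers of a real model, but the outer next-word-at-a-time loop has the same shape:

```python
# Hypothetical toy "model": a bigram table instead of billions of weights.
BIGRAM = {"the": "cat", "cat": "sat", "sat": "down"}

def generate(prompt: str, max_new_tokens: int = 3) -> str:
    tokens = prompt.split()              # step 1: words become tokens (numbers, in a real model)
    for _ in range(max_new_tokens):      # final step: guess the next word, again and again
        next_word = BIGRAM.get(tokens[-1])  # a real model scores candidates via attention + layers
        if next_word is None:
            break
        tokens.append(next_word)
    return " ".join(tokens)

print(generate("the"))  # "the cat sat down"
```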
In AI, more context isn't always better; sometimes it's just expensive noise. Every token counts, which means how you use the context window is as important as how you use the model itself.

The smartest move is not to dump everything into prompts. Instead:

Summarize long histories into concise notes.
Compress knowledge into embeddings.
Retrieve only what matters at the moment.

In one review-analysis pipeline, compressing hundreds of comments into structured insights cut token usage by 60% while actually improving accuracy.

Are you optimizing context for efficiency, or paying for wasted tokens?

#AI #ContextEngineering #AppliedAI #FutureOfWork #GenerativeAI
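As a toy illustration of the compression idea (not the author's actual pipeline): collapse duplicate review comments into counted insights before they ever reach the prompt. A real pipeline would use an LLM or embeddings to summarize, but even naive deduplication shows the token math:

```python
from collections import Counter

def token_count(text: str) -> int:
    # Rough proxy: whitespace tokens. Real systems use the model's tokenizer.
    return len(text.split())

def compress(comments: list[str]) -> list[str]:
    # Collapse repeated comments into "Nx: insight" lines.
    counts = Counter(comments)
    return [f"{n}x: {text}" for text, n in counts.most_common()]

comments = (["Great battery life, lasts two days"] * 50
            + ["Screen is too dim outdoors"] * 30)

raw_prompt = "\n".join(comments)                # dump everything: 450 tokens
compact_prompt = "\n".join(compress(comments))  # structured insights: 13 tokens
saving = 1 - token_count(compact_prompt) / token_count(raw_prompt)
print(f"token saving: {saving:.0%}")
```

The fabricated comment data is for illustration only; the point is that structured insights scale with the number of *distinct* points, not the number of raw comments.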
Hard truth: AI isn't here to replace thinking. It's here to expose how shallow our mental models really are.

We celebrate AI's speed and accuracy, but the real revelation is how often we rely on gut feelings or surface-level judgment. AI forces us to confront the cracks in our cognitive frameworks, the same ones we've ignored for decades.

Before you throw AI at your problems, ask yourself: how solid is your understanding? Automating shallow thinking gets you faster errors, not better solutions. Deep models matter more than ever, because that's where real insight lives.

Sounds harsh? It's just math: better input, smarter outcomes. Most people get this wrong because they confuse tools with wisdom. Are you ready to rethink how you think?

#AI #MentalModels #CriticalThinking #DecisionMaking #DeepWork #ThoughtLeadership
All very intellectually stimulating, but… the solution to this problem doesn't address what actually matters for making Generative AI / Large Language Models more reliable.

The problem discussed by Horace He and colleagues at Thinking Machines Lab isn't *the* problem in achieving reliable Generative AI. If anything, achieving 100% reproducibility in LLM inference (outputs) could lead to *perceived* higher reliability for end users when, in fact, this is not the case.

That is a dark pattern: making something look more reliable and "intelligent" than it really is, to the financial benefit of those selling the solution, and to the detriment of end users.

Nondeterminism in LLM inference (i.e. variable outputs) is actually one of the things I find useful about LLMs as thinking partners.

That's my spicy take for the weekend. What's your take? Link to the article in comments.

#AGI #AI #GenAI #LLMs Gary Marcus
Our Chief Product and Technology Officer, Bob Rogers, was recently interviewed by The Daily Upside. 💡

In his interview, Bob explains why the hype around artificial general intelligence often overshadows the real value that businesses can gain from smaller, purpose-built AI models. Large language models are powerful and impressive, but they still struggle with accuracy and reasoning. That is why organisations often see better results with AI solutions designed and tuned for their specific needs.

Thank you Nat Rubio-Licht!

Read the full interview here: https://lnkd.in/gtK8kDm4

#OiiAI #AI #ArtificialIntelligence #TheDailyUpside #TechLeadership #Innovation #EnterpriseAI #GeneralIntelligence
RAG vs. Fine-Tuning: it's one of the most debated topics in AI. Confused about which to use for your AI project? Here's a simple, practical breakdown of when to use each.

🔵 Use RAG (Retrieval-Augmented Generation) when you need to:
Access real-time or frequently changing information.
Reduce model 'hallucinations' by grounding it in facts.
Cite specific sources for its answers.
Implement a solution quickly and cost-effectively.
Think: factual knowledge & accuracy.

🟠 Use Fine-Tuning when you need to:
Teach the AI a very specific, nuanced style, tone, or format.
Change the core behavior or 'personality' of the model.
Embed specialized knowledge that is static and stylistic.
Handle complex, domain-specific instructions.
Think: specialized skills & behavior.

They aren't mutually exclusive, but knowing where to start is key to building effective and reliable AI.

Did this clear things up for you? What other AI topics are you debating? Let's discuss in the comments! 👇

#AI #TechDebate #ArtificialIntelligence #RAGvsFineTuning #MachineLearning
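A minimal sketch of the RAG side, with a toy word-overlap retriever standing in for the embedding search a production system would use (the documents and question are made up for illustration):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank documents by word overlap with the query.
    # Real RAG uses embeddings and a vector index instead.
    q_words = set(query.lower().replace("?", "").split())
    return sorted(docs,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

def rag_prompt(query: str, docs: list[str]) -> str:
    # Ground the model in retrieved facts so it can cite sources
    # instead of hallucinating.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Policy doc: the refund window is 30 days from purchase.",
    "Shipping doc: delivery takes 3-5 business days.",
    "Support doc: chat support is available 24/7.",
]
print(rag_prompt("How many days is the refund window?", docs))
```

Updating the answer when the refund policy changes means editing `docs`, not retraining the model, which is exactly why RAG wins for fast-changing facts.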
Sometimes the best way to get better answers is simple: ask the AI to "think step by step."

This approach is called Chain of Thought. It helps break down complex problems, makes the reasoning clearer, and often leads to more reliable results.

Just like in school: the process matters as much as the answer.

#AI #PromptEngineering #LearningJourney #ArtificialIntelligence #ChainOfThought #GenerativeAI #AIForEveryone #DesignMeetsAI #FutureOfWork
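A sketch of the prompt pattern; the model call itself is omitted, and the demo completion below is made up to illustrate the parsing step:

```python
def with_cot(question: str) -> str:
    # Wrap the question in a step-by-step instruction and pin down
    # where the final answer should appear, so it is easy to parse.
    return (f"{question}\n"
            "Let's think step by step, then give the final result "
            "on its own line as 'Answer: ...'.")

def extract_answer(completion: str) -> str:
    # Pull the final answer out of a step-by-step completion.
    for line in completion.splitlines():
        if line.startswith("Answer:"):
            return line.removeprefix("Answer:").strip()
    return completion.strip()

# Hypothetical completion a model might return for with_cot("What is (17 + 5) * 2?")
demo = "Step 1: 17 + 5 = 22\nStep 2: 22 * 2 = 44\nAnswer: 44"
print(extract_answer(demo))  # "44"
```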
AI's not 'reasoning' at all: how this team debunked the industry hype.

Researchers clarified that language models' 'chain of thought' does not equate to reasoning, revealing the limits of current AI systems in handling context and logic. Monitor accuracy rates and user engagement metrics to assess the practical implications of these findings for AI applications.

https://lnkd.in/eitHpv6Z

#NovaSyncInsights #AI #LLM
Use AI to analyze a company against its peers on the Bloomberg Terminal. 🖥️ With DSX <GO> you can ask complex questions in everyday language and get AI-powered answers from across multiple sources in seconds. ⏲ #BloombergProTips #AI