Sivasankar Natarajan’s Post

Technical Director | GenAI Practitioner | Azure Cloud Architect | Data & Analytics | Solutioning What’s Next

𝐋𝐋𝐌𝐬 𝐚𝐫𝐞 𝐬𝐦𝐚𝐫𝐭, 𝐛𝐮𝐭 𝐝𝐨 𝐲𝐨𝐮 𝐤𝐧𝐨𝐰 𝐡𝐨𝐰 𝐭𝐡𝐞𝐲 𝐚𝐜𝐭𝐮𝐚𝐥𝐥𝐲 𝐬𝐞𝐚𝐫𝐜𝐡 𝐟𝐨𝐫 𝐭𝐡𝐞 𝐛𝐞𝐬𝐭 𝐚𝐧𝐬𝐰𝐞𝐫𝐬?

Most people think AI just spits out one response. In reality, Large Language Models (LLMs) explore multiple possible answers before deciding which one to present.

𝐓𝐡𝐞 𝐭𝐫𝐢𝐜𝐤 𝐥𝐢𝐞𝐬 𝐢𝐧 𝐭𝐡𝐞 𝐬𝐞𝐚𝐫𝐜𝐡 𝐬𝐭𝐫𝐚𝐭𝐞𝐠𝐲: 𝐭𝐡𝐞 𝐦𝐞𝐭𝐡𝐨𝐝 𝐭𝐡𝐞 𝐦𝐨𝐝𝐞𝐥 𝐮𝐬𝐞𝐬 𝐭𝐨 𝐟𝐢𝐧𝐝 𝐭𝐡𝐞 𝐦𝐨𝐬𝐭 𝐚𝐜𝐜𝐮𝐫𝐚𝐭𝐞, 𝐮𝐬𝐞𝐟𝐮𝐥 𝐫𝐞𝐬𝐩𝐨𝐧𝐬𝐞.

𝐋𝐞𝐭’𝐬 𝐛𝐫𝐞𝐚𝐤 𝐝𝐨𝐰𝐧 𝐭𝐡𝐞 𝐭𝐡𝐫𝐞𝐞 𝐦𝐨𝐬𝐭 𝐜𝐨𝐦𝐦𝐨𝐧 𝐨𝐧𝐞𝐬 𝐢𝐧 𝐩𝐥𝐚𝐢𝐧 𝐄𝐧𝐠𝐥𝐢𝐬𝐡 (a toy sketch of all three follows at the end of this post):

𝟏. 𝐁𝐞𝐬𝐭-𝐨𝐟-𝐍
The model generates multiple complete answers to the same question. A verifier then picks the best one of the lot. Think of it as asking five friends the same question and choosing the most reliable answer.

𝟐. 𝐁𝐞𝐚𝐦 𝐒𝐞𝐚𝐫𝐜𝐡
Instead of generating full answers directly, the model expands step by step. At each step it keeps the top-N possible continuations and discards the weaker ones. This lets the model explore different paths and refine its reasoning.

𝟑. 𝐋𝐨𝐨𝐤𝐚𝐡𝐞𝐚𝐝 𝐒𝐞𝐚𝐫𝐜𝐡
This takes Beam Search one step further. The model “𝐫𝐨𝐥𝐥𝐬 𝐨𝐮𝐭” a few steps ahead, evaluates the outcomes, and even rolls back if needed. It’s like playing chess: thinking a few moves ahead before making your next move.

𝐖𝐡𝐲 𝐭𝐡𝐢𝐬 𝐦𝐚𝐭𝐭𝐞𝐫𝐬
These strategies are what make AI more than just a text generator. They ensure answers are not only fluent but also factually stronger and logically consistent. The better we design these search strategies, the closer AI gets to human-like reasoning.

𝐖𝐡𝐢𝐜𝐡 𝐨𝐟 𝐭𝐡𝐞𝐬𝐞 𝐬𝐭𝐫𝐚𝐭𝐞𝐠𝐢𝐞𝐬 𝐝𝐨 𝐲𝐨𝐮 𝐭𝐡𝐢𝐧𝐤 𝐢𝐬 𝐜𝐥𝐨𝐬𝐞𝐬𝐭 𝐭𝐨 𝐡𝐨𝐰 𝐡𝐮𝐦𝐚𝐧𝐬 𝐫𝐞𝐚𝐬𝐨𝐧 𝐭𝐡𝐫𝐨𝐮𝐠𝐡 𝐩𝐫𝐨𝐛𝐥𝐞𝐦𝐬?

♻️ Repost this to help your network get started
➕ Follow Sivasankar Natarajan for more

#AI #LLM #ArtificialIntelligence #AgenticAI
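To make the three strategies concrete, here is a minimal toy sketch in Python. It is not any particular library’s API: generate_answer, expand, and score are hypothetical stand-ins for sampling from an LLM, proposing next reasoning steps, and scoring candidates with a verifier. Only the control flow of each strategy is the point.

```python
# Toy sketch of the three search strategies. `generate_answer`, `expand`, and
# `score` are hypothetical stand-ins: a real system would sample from an LLM
# and score candidates with a trained verifier / reward model.

import random

def generate_answer(prompt: str) -> str:
    """Stand-in for sampling one complete answer from an LLM."""
    return f"{prompt} -> answer#{random.randint(0, 999)}"

def expand(path: list[str]) -> list[list[str]]:
    """Stand-in for proposing a few next reasoning steps for a partial path."""
    return [path + [f"step{random.randint(0, 9)}"] for _ in range(3)]

def score(text: str) -> float:
    """Stand-in verifier: higher = better. Real systems use a reward model."""
    return (hash(text) % 1000) / 1000

# 1. Best-of-N: sample N full answers, let the verifier pick the winner.
def best_of_n(prompt: str, n: int = 5) -> str:
    candidates = [generate_answer(prompt) for _ in range(n)]
    return max(candidates, key=score)

# 2. Beam Search: grow answers step by step, keeping only the best partial paths.
def beam_search(prompt: str, beam_width: int = 2, depth: int = 4) -> list[str]:
    beams = [[prompt]]
    for _ in range(depth):
        candidates = [new for path in beams for new in expand(path)]
        # Prune: keep only the `beam_width` highest-scoring partial paths.
        beams = sorted(candidates, key=lambda p: score(" ".join(p)), reverse=True)[:beam_width]
    return beams[0]

# 3. Lookahead Search: judge each candidate step by where a short rollout of it
#    is likely to lead, not just by its immediate score.
def lookahead_value(path: list[str], rollout_depth: int = 2) -> float:
    rollout = path
    for _ in range(rollout_depth):
        rollout = max(expand(rollout), key=lambda p: score(" ".join(p)))
    return score(" ".join(rollout))

def lookahead_search(prompt: str, depth: int = 4) -> list[str]:
    path = [prompt]
    for _ in range(depth):
        path = max(expand(path), key=lookahead_value)
    return path

if __name__ == "__main__":
    print(best_of_n("Why is the sky blue?"))
    print(beam_search("Why is the sky blue?"))
    print(lookahead_search("Why is the sky blue?"))
```

The trade-off is compute: Best-of-N is the simplest to wire up, while beam and lookahead search spend more tokens per answer so they can prune weak reasoning paths earlier.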

Paul Ingram

Founder @ CanaryFlow AI | I help businesses scale with AI automation

2w

So basically, AI doesn’t just guess once - it argues with itself until the best answer survives. Sounds pretty human to me 😀.

Hina Arora

AI-Assisted Personal Branding for Tech Leaders | Helping Tech Leaders & Coaches Build Personal Brand for Professional Growth | Partnered with 15+ CTOs, Directors & Founders | Done-for-You Service | 120K+ Insta | EM @ Jio

2w

Interesting breakdown! I’d say Lookahead Search feels closest to humans: we usually think a few steps ahead, weigh outcomes, and then decide, just like in chess. Sivasankar Natarajan

Divanshu Anand

Founder @ DecisionAlgo | Turning Data into Intelligence, Powered by AI and Data Science | Head of Data Science @ Chainaware.ai | Ex - MuSigman

1w

LLMs do more than generate text. Best-of-N, Beam Search, and Lookahead let them explore multiple paths, refine answers, and think ahead to deliver responses that are accurate and logically consistent.

Meenakshi A.

Technologist & Believer in Systems for People and People for Systems

1w

Types of switching to know about the same for the good 😊

Mohammad Syed

Founder & Principal Architect | AI/ML Architecture - AI Security - Cybersecurity | Securing AWS/Azure/GCP

2w

Sivasankar Natarajan, great breakdown of LLM search mechanics! Understanding how Best-of-N compares multiple full answers really helps explain why some outputs feel more reliable.

Jothi Moorthy

CS Manager-Agentic AI Architect@IBM| #42 Favikon Top Creator🔥| 240K+ FB |Thought Leader| Keynote Speaker| Board Member| Podcast Host| Publisher| IBM ATE Top 0.05%| IBM Tech Top 1%| Multiple Patents| Multiple OTA Awards

2w

Really helpful way to see what’s happening behind the scenes in AI responses, Sivasankar Natarajan

Ashok Kumar

Kubernetes Cost Optimization & Reliability | MLOps & GPU (Kubeflow, NVIDIA Triton, vLLM) | EKS | GKE | AKS | FinOps | Observability | DevSecOps

2w

Love how this breaks down search strategies 👏. Beam and Lookahead search especially highlight that LLMs aren’t just generating words—they’re actively exploring and pruning reasoning paths. Feels closer to how humans think: test options, discard weak ones, and refine towards the best outcome.

Mudassir Mustafa

Context Aware DevOps Co-pilot

2w

Love this breakdown; search strategies are the hidden engine behind why LLMs feel smart. The choice of search method can often matter as much as the model itself. Sivasankar Natarajan

Aakash Verma

Founder @ Aixy Media | I connect AI Companies with Verified Influencers and give Guaranteed Campaign Results with 100% Transparency | AI Brand Growth Specialist | Scale Globally Without the Guesswork

2w

Lookahead search in LLMs makes responses sharper and more context-aware. It helps the model anticipate better, delivering clearer and more accurate outputs. Sivasankar Natarajan
