Aishwarya Srinivasan
San Francisco Bay Area
585K followers
500+ connections
Other similar profiles
-
Daliana Liu
Helping tech leaders and senior ICs gain visibility, communicate their value, and achieve their versions of success | Ex-Amazon Sr. Data Scientist
New York, NY
Explore more posts
-
Brij kishore Pandey
Large Language Models (LLMs) may look similar on the surface, but their architectures define their strengths, trade-offs, and use cases. Understanding these differences is key to making the right choices in research and real-world applications. Here’s a deeper look at the four foundational LLM architectures:

1. Decoder-Only Models (GPT, LLaMA)
- Autoregressive design: predict the next token step by step.
- Power generative applications like chatbots, assistants, and content creation.
- Strength: fluent, creative text generation.
- Limitation: struggles with tasks requiring bidirectional context understanding.

2. Encoder-Only Models (BERT, RoBERTa)
- Built to understand rather than generate.
- Capture deep contextual meaning using bidirectional self-attention.
- Well suited to classification, search relevance, and embeddings.
- Strength: strong semantic understanding.
- Limitation: cannot generate coherent long-form text.

3. Encoder–Decoder Models (T5, BART)
- Combine the understanding power of encoders with the generative power of decoders.
- Suited for sequence-to-sequence tasks: summarization, translation, Q&A.
- Strength: flexible and powerful across diverse NLP tasks.
- Limitation: computationally more expensive than single-stack models.

4. Mixture of Experts (MoE: Mixtral, GLaM)
- A gating network activates only a subset of parameters (experts) per input (see the sketch below).
- Provides scalability without proportional compute cost.
- Strength: massive capacity plus efficiency.
- Limitation: complexity in training, routing, and stability.

Decoder-only models dominate today’s consumer AI (e.g., ChatGPT), but MoE architectures hint at the future: scaling models efficiently without exploding costs. Encoder-only and encoder–decoder models remain critical in enterprise AI pipelines where accuracy, context understanding, and structured outputs matter more than freeform generation.

The next decade of AI may not be about “bigger is better,” but about choosing the right architecture for the right job, balancing efficiency, accuracy, and scalability.

Which architecture do you believe will shape enterprise AI adoption at scale: GPT-style generalists or MoE-driven specialists?
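To make the MoE routing idea concrete, here is a minimal PyTorch sketch of top-k expert routing. The layer sizes, expert count, and top_k value are illustrative assumptions, not any particular model's configuration.

```python
# Minimal sketch of top-k MoE routing: each token runs through only
# top_k of n_experts, so capacity grows without proportional compute.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)  # routing (gating) network
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (n_tokens, d_model)
        scores = self.gate(x)                           # (n_tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)            # renormalize over chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for k in range(self.top_k):
                mask = idx[:, k] == e                   # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 64)
print(TinyMoE()(tokens).shape)  # torch.Size([16, 64])
```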
334
47 Comments
-
Sam Sur
Why isn’t AI boosting productivity in manufacturing yet? MIT Sloan just explored this in a must-read piece: “The Productivity Paradox of AI Adoption in Manufacturing.”

The key takeaway is that we are seeing the J-curve effect in action. In the early stages of AI adoption, productivity often dips, which is normal:
1. Costs rise due to integration, training, and change management.
2. Old processes clash with new tech.
3. Gains are isolated in pilots or siloed tools.

It is only after this initial dip, when workflows are redesigned, people are upskilled, and data foundations mature, that the exponential gains begin to take hold. This is the J-curve of AI transformation: short-term pain leading to long-term advantage.

We see many manufacturers give up too early, just before the curve turns upward. Leaders need to set expectations, invest in capabilities, and commit to scaling AI beyond pilots. Successful firms rethink, not just automate, their operations, though automation may be a necessary first step.

What stood out to me: “You can’t bolt AI onto legacy workflows and expect future-ready results.”

Makoro shows how businesses can accelerate through the J-curve, from pilot to productivity, by adopting AI-native manufacturing systems.

Where are you on the J-curve? Early dip, scaling gains, or riding the upswing?

#AI #Manufacturing #Productivity #JCurve #DigitalTransformation #MakoroAI #Industry40 #AIStrategy
https://lnkd.in/gJarKHpG
65
6 Comments
-
Dileep Pandiya
🚀 LLMs + Machine Learning: The New Power Couple in AI 🤖💡

A major shift is happening in the AI space: the powerful integration of Large Language Models (LLMs) and Machine Learning (ML) is transforming how businesses automate, predict, and personalize at scale.

🔍 Why it matters: LLMs bring deep contextual understanding, reasoning, and human-like text generation, while ML brings pattern recognition, data classification, and predictive analytics. Together, they create intelligent, dynamic systems that can handle complex workflows with minimal human intervention.

💡 Here’s how they work together effectively (see the sketch below):
🔹 Extract and generate structured features from raw or messy text
🔹 Summarize, label, and clean unstructured data
🔹 Personalize suggestions and recommendations
🔹 Power AutoML systems with prompt-based guidance
🔹 Translate model outputs into understandable insights
🔹 Enable decision-making pipelines that learn and adapt in real time

🧠 Think of LLMs as the brain and ML as the muscle: LLMs interpret, explain, and communicate, while ML powers execution and learning from data.

📈 Why this duo is the future:
✅ Enables scalable, real-time insight generation
✅ Boosts automation with human-like intelligence
✅ Accelerates product development and innovation
✅ Supports smarter, adaptive learning systems
✅ Drives competitive advantage in data-centric industries

📢 What this means for you: if your organization works with data, this integration isn’t just helpful, it’s essential. 💥 Future-proof your AI strategy by combining the interpretive power of LLMs with the analytical strength of ML.
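As a concrete illustration of the first pattern, here is a minimal sketch in which an LLM turns messy text into fixed numeric features for a classical ML model. `call_llm` is a hypothetical stand-in for whatever chat-completion API you use; it is stubbed here so the example runs end to end without credentials.

```python
# Sketch: an LLM extracts structured features from messy text; a classical
# ML model (logistic regression) then learns from those features.
import json
from sklearn.linear_model import LogisticRegression

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion API. Returns a canned
    # JSON answer derived from the prompt so the sketch is runnable.
    urgency = 0.9 if "!!!" in prompt else 0.1
    return json.dumps({
        "urgency": urgency,
        "sentiment": -0.5 if urgency > 0.5 else 0.2,
        "exclamations": prompt.count("!"),
    })

def extract_features(ticket: str) -> list[float]:
    # Ask the LLM to map unstructured text onto a fixed numeric schema.
    prompt = (
        "Return JSON with keys urgency (0-1), sentiment (-1 to 1), and "
        f"exclamations (count) for this support ticket:\n{ticket}"
    )
    parsed = json.loads(call_llm(prompt))
    return [parsed["urgency"], parsed["sentiment"], parsed["exclamations"]]

# Toy training data: LLM-derived features plus historical escalation labels.
X = [extract_features(t) for t in ["Server down!!!", "Minor typo on page"]]
y = [1, 0]  # 1 = escalated, 0 = not escalated
model = LogisticRegression().fit(X, y)
print(model.predict([extract_features("Checkout is broken!!!")]))  # likely [1]
```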
185
29 Comments
-
Naveen Sharma
Everyone’s chasing bigger models. But what if the future of Agentic AI is actually small?

A new paper from NVIDIA and Georgia Tech shows that small language models can be faster, cheaper, and just as good for most structured tasks. The takeaway is simple:
• Small models handle routine work efficiently
• Large models are best reserved for complex reasoning
• A hybrid approach gives you the best of both worlds (see the sketch below)

Scaling AI isn’t about size. It’s about fit for purpose; some things never change as enterprises move onward.

Read the paper here: https://lnkd.in/enTzR74U

#AgenticAI #SmallLanguageModels #AI #DigitalTransformation
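One way to picture the hybrid approach is a lightweight router that sends routine requests to a small model and escalates only complex ones. The model names and the complexity heuristic below are illustrative assumptions, not taken from the paper.

```python
# Sketch of hybrid routing: cheap small model by default, large model
# only when the request looks like it needs multi-step reasoning.
def looks_complex(task: str) -> bool:
    # Crude heuristic: long prompts or reasoning cues escalate the request.
    cues = ("why", "prove", "plan", "compare", "step by step")
    return len(task.split()) > 60 or any(c in task.lower() for c in cues)

def pick_model(task: str) -> str:
    # Hypothetical model names, for illustration only.
    return "large-reasoning-model" if looks_complex(task) else "small-task-model"

print(pick_model("Extract the invoice date from this email."))       # small-task-model
print(pick_model("Plan a phased migration and compare trade-offs."))  # large-reasoning-model
```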
217
8 Comments
-
Peter Bendor-Samuel
As enterprises move beyond system-level analytics, the focus is shifting to what really happens at the user level. That’s where Digital Interaction Intelligence (#DII) comes in.

DII tools track granular digital interactions (keystrokes, clicks, screen activity) to help enterprises identify automation opportunities, boost operational efficiency, and improve the employee experience. Unlike process mining, DII offers a user-centric lens that leads to faster, more actionable insights.

In our latest Digital Interaction Intelligence Products PEAK Matrix® Assessment 2025, Everest Group evaluates 18 leading DII software products on our PEAK Matrix®, assessing:
✅ Product vision and innovation strategy
✅ Security and privacy safeguards
✅ Deployment and support capabilities
✅ Commercial model flexibility and ease of integration

The products assessed: Automation Anywhere, ABBYY, Celonis, Cyclone Robotics, EdgeVerve, epiplex.ai, IBM, KYP.ai, Microsoft, Mimica, NiCE, Nintex, Optimus Hive, Pegasystems, Skan AI, Soroco, StereoLOGIC Ltd., UiPath.

Whether you’re leading a digital transformation or exploring tools to boost productivity and insight, this assessment offers a clear view of where the DII market is headed.

Read on: https://okt.to/tZndAe
Get in touch: Amardeep Modi, Santhosh Kumar Vadrevu, Vershita Srivastava, Shreepriya Sinha, Niyati Vohra, Sudeshna Chandra

#DigitalInteractionIntelligence #ProcessAutomation #PEAKMatrix #EmployeeExperience #OperationalEfficiency #EverestGroup
53
3 Comments
-
Adam Tornhill
This is an important research paper: modern AI models often write code that works, but that doesn’t mean the code is efficient or maintainable. And that matters: working code isn’t always good code. AI-generated solutions may pass tests while still being fragile, messy, and costly to change later.

The key takeaway is that functional correctness alone isn’t enough. (Great to see this myth debunked with data.) Rather, the authors present a benchmark that also evaluates runtime efficiency and code quality. (I’m honored that they chose CodeScene’s Code Health metric to measure AI code quality.)

Read the full paper from James Meaden and the Codility team here: https://lnkd.in/dREjBrYs
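A toy illustration of the point (mine, not from the paper): both functions below pass the same tests, yet one is quadratic and the other linear, a difference that correctness-only benchmarks never see.

```python
# Two functionally identical implementations: a test suite cannot tell
# them apart, but their efficiency differs sharply.
def has_duplicates_quadratic(xs: list) -> bool:
    # Works, but O(n^2): compares every pair of elements.
    return any(x == y for i, x in enumerate(xs) for y in xs[i + 1:])

def has_duplicates_linear(xs: list) -> bool:
    # Same behavior, O(n): tracks previously seen values in a set.
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

for sample in ([1, 2, 3, 2], [1, 2, 3], []):
    assert has_duplicates_quadratic(sample) == has_duplicates_linear(sample)
```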
169
22 Comments
-
Debmalya Biswas
Sharing my latest article in AI Advances on #Human-in-the-Loop (HITL) strategy for #AgenticAI.

Given the unreliability of current #LLMs / reasoning models, studies have shown that a collaborative human-assisted model works much better in terms of task fulfillment today. For #autonomous agents, HITL is also the promise that we make to our #Legal & #Compliance teams to get their go-ahead :) But do we really have humans effectively embedded in our agentic #workflows?

Given this, the proposal is to integrate humans as first-class citizens in the agentic #lifecycle, not only as supervisors / reviewers. This also implies that appropriate #UI/UX needs to be designed to allow human intervention for the right task at the right time. (Similar to agentic state #checkpointing, the frequency and format used to solicit human feedback matter.)

Below is a list of the key human #intervention points to plan for (a code sketch follows the list):
- Co-plan: Validate the generated #plan, ensuring that it corresponds to the user intent.
- Co-execute: Users can intermittently pause the #execution and give feedback if the agent’s / tool’s response does not comply with the assigned task.
- Co-comply: Users can mark critical and irreversible tasks (e.g., payments) and ensure that the right #guardrails, compliant with enterprise policies, have been applied before approving the task.
- Co-memorize: Refine #memory, reviewing key long-term memory concepts, optimizing storage, and ensuring reusability.

This is complemented by a continuous improvement module that learns from historical #interactions to optimize future human interventions.

(views are my own)
https://lnkd.in/epAy95wD
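As a minimal sketch of the co-comply and co-execute ideas (my illustration, not from the article): an agent loop that pauses for human approval whenever a step is marked critical or irreversible. The task structure and the CRITICAL set are assumptions.

```python
# Sketch of a human-approval gate: execution pauses on tasks marked
# critical/irreversible (co-comply) and solicits feedback at exactly
# that point in the run (co-execute).
from dataclasses import dataclass

CRITICAL = {"payment", "delete_record"}  # assumed set of irreversible actions

@dataclass
class Step:
    action: str
    payload: dict

def run_plan(plan: list[Step]) -> None:
    for step in plan:
        if step.action in CRITICAL:
            # Checkpoint: hand control to the human before proceeding.
            answer = input(f"Approve {step.action} {step.payload}? [y/N] ")
            if answer.strip().lower() != "y":
                print(f"Skipped {step.action} (human rejected)")
                continue
        print(f"Executing {step.action}")

run_plan([Step("summarize", {"doc": "q3-report.pdf"}),
          Step("payment", {"amount_usd": 120})])
```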
129
12 Comments