95% of generative AI pilots deliver zero measurable business impact. MIT's "GenAI Divide: State of AI in Business 2025" report spells it out: only about 5% of enterprise AI initiatives are truly transforming the bottom line. The problem isn't the AI, it's the integration.

Over the past couple of years I've had the privilege of collaborating with Support and Services leaders to shape an AI for CX strategy. Our key lessons?

• Embed AI into workflows; don't just add it as an afterthought.
• Back-office automation is where the real ROI lies, not always the customer-facing "wow" factor.
• Partner smart: vendor-led solutions often outperform internal builds.
• Measure P&L impact over adoption metrics; real success shows up where the numbers move.

Investors are reacting. AI-linked stocks like Nvidia and Palantir dropped significantly (Nvidia around 3%, Palantir nearly 10%) as markets question whether AI hype is outpacing actual value.

Rather than just reporting gloom, I'm crowdsourcing clarity. The opportunity? Be among the 5%.

Questions I'm pondering, and hoping you'll weigh in on:
• How are you measuring success in your AI pilots, beyond adoption?
• What approaches helped you cross the "learning gap" and land real ROI?

Hoping for conversations where accountability meets ambition, to lead AI adoption from hype to hard results.

https://lnkd.in/gwekTtFX
95% of AI pilots fail to deliver. How to be in the 5%.
More Relevant Posts
-
MIT's new report shows a hard truth: 95% of enterprise AI pilots fail. Not because the models aren't good enough, but because they aren't integrated into real business workflows.

At Grid Dynamics, we see this gap every day. Success isn't about running more pilots; it's about embedding AI into the processes that actually drive P&L. That's why we partner with platforms like Temporal.io:

• Workflow-first. Durable orchestration ensures AI doesn't sit on the sidelines; it powers core processes like claims, onboarding, and compliance (see the sketch below).
• Partner > build alone. MIT found external partnerships succeed twice as often. We bring proven blueprints to accelerate adoption.
• Focus where ROI is real. The biggest gains come not from flashy front-end pilots, but from automating the costly back-office workflows that scale.

The winners won't be those experimenting the most, but those who build workflow-native, auditable, and scalable AI systems.

Curious: where are your AI initiatives today, stuck in pilot mode, or delivering measurable impact?
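For readers who haven't seen what "durable orchestration" of an AI step looks like, here is a minimal sketch using Temporal's Python SDK. The claims-triage use case, the triage_claim activity, and the call_llm stub are illustrative assumptions for this post, not Grid Dynamics' actual blueprint; the point is simply that the model call runs inside a retried, durable workflow instead of as a bolt-on script.

```python
from datetime import timedelta

from temporalio import activity, workflow
from temporalio.common import RetryPolicy


async def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model client call.
    return "needs_adjuster"


@activity.defn
async def triage_claim(claim_text: str) -> str:
    # Activities are where side effects (model/API calls) belong in Temporal.
    return await call_llm(f"Classify this insurance claim: {claim_text}")


@workflow.defn
class ClaimsTriageWorkflow:
    @workflow.run
    async def run(self, claim_text: str) -> str:
        # The workflow durably orchestrates the AI step: if the model call
        # fails or a worker crashes, Temporal retries and resumes the work
        # instead of silently dropping the claim.
        return await workflow.execute_activity(
            triage_claim,
            claim_text,
            start_to_close_timeout=timedelta(minutes=2),
            retry_policy=RetryPolicy(maximum_attempts=3),
        )
```

Running this also requires a Temporal server and a worker that registers the workflow and activity, which is omitted here for brevity.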
-
An interesting study. I haven't read the full research, so it would be irresponsible for me to comment on it in depth; instead, I will make a few obvious points...

1. Companies need to be very clear about their outcomes. In this phase of the diffusion, the goal should be purely margin expansion through enhanced productivity. Margin per employee should be the key metric.
2. Revenue enhancements will only happen when AI is used either to find new customer journeys or to solve more pain points in current journeys. I'm not sure generative AI allows that yet.
3. The use cases in marketing feel just meh at this point, as generative AI isn't yet at a stage where it will grow the top line by creating new markets or growing share. Using DALL-E to generate a visual for a campaign and calling it an AI use case is as lame as getting drunk on grape juice.
4. Many legacy companies love to indulge in innovation theatre so they can use the word AI on their earnings calls and provide good guidance for the year. The engineers of yesterday's success often become the architects of tomorrow's failure.
5. Geoffrey Moore's work in Crossing the Chasm and the four innovation zones is a must-read for anybody wanting to integrate tech into their business.

The main question is this: the current AI capex and opex is around $500-750bn, mostly by the Mag 7 in data centre buildouts. Even at an ROI of 30%, that implies monetisation of about $1tn (rough arithmetic below). Where is that going to come from, especially when most current use cases are about productivity and margin expansion? But hey, the only growth story right now is AI, so let's just hope this doesn't turn out to be a donkey painted with zebra stripes.

PS: Let me book some profits in my AI holdings before they all go through a correction.

https://lnkd.in/grA3ZrBP
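One way to read that back-of-the-envelope figure (an interpretation of this post's numbers, not a figure from the MIT report) is to take the upper end of the stated spend and require a 30% return on top of it:

```latex
\underbrace{\$750\,\mathrm{bn}}_{\text{capex + opex}} \times \underbrace{1.30}_{\text{30\% ROI}} \approx \$975\,\mathrm{bn} \approx \$1\,\mathrm{tn}
```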
-
Here's the uncomfortable truth about AI right now: a lot of what's marketed as "science" is still science fiction. Recent findings out of MIT highlight a stark divide: most generative AI pilots aren't delivering returns, and only about 5% make it into production with measurable impact. That's not doom and gloom; it's a signal to get serious about evidence over hype.

What separates science fact from fiction in AI programs?

→ Clear, boring problems > shiny demos. Choose workflows with stable inputs/outputs, owned data, and concrete KPIs (handle time, first-call resolution, dollars collected). Demos don't count; deployed outcomes do.
→ Architecture that can survive Tuesday. Production means observability, fallback paths, human-in-the-loop, cost controls, and data governance before the pilot, not after the press release (a rough sketch of one such guardrail follows below).
→ Change management is the product. Winning teams train users, tune with real transcripts, and rewrite workflows around the AI, not just bolt a model onto legacy processes.

If you're evaluating AI this quarter, ask:
• Which business metric moves, by how much, and by when?
• What's the fail-safe when the model is wrong or slow?
• What does it take (people, process, and budget) to run this every day, not just on stage?

In a market where only a small fraction reaches production, the advantage goes to leaders who insist on instrumentation, integration, and iteration. Less sci-fi, more shipping.

#AI #GenerativeAI #MLOps #DigitalTransformation #ChangeManagement #ContactCenter #EnterpriseSoftware #DataGovernance

Ref: https://lnkd.in/gDwFaH3P
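To make "fallback paths" and "human-in-the-loop" concrete, here is a minimal, hypothetical sketch. The call_model stub, the escalate_to_agent queue, the 5-second budget, and the 0.8 confidence threshold are all illustrative assumptions, not a prescription from the MIT report or any particular vendor.

```python
# Minimal sketch of a guarded AI step: bounded latency, a confidence gate,
# and a human fallback. call_model and escalate_to_agent are hypothetical
# stand-ins for your own model client and ticketing queue.
import asyncio


async def call_model(ticket_text: str) -> tuple[str, float]:
    # Stub: return (draft_reply, confidence); replace with a real model client.
    return "Thanks for reaching out - here is how to reset your password...", 0.62


def escalate_to_agent(ticket_text: str, reason: str) -> str:
    # Stub: push the ticket to a human queue and return a holding response.
    return f"Routed to an agent ({reason})."


async def handle_ticket(ticket_text: str, *, timeout_s: float = 5.0,
                        min_confidence: float = 0.8) -> str:
    try:
        # Fail-safe #1: bound latency so a slow model can't stall the workflow.
        draft, confidence = await asyncio.wait_for(call_model(ticket_text), timeout_s)
    except (asyncio.TimeoutError, ConnectionError):
        return escalate_to_agent(ticket_text, reason="model slow or unreachable")

    # Fail-safe #2: low-confidence answers go to a human, not to the customer.
    if confidence < min_confidence:
        return escalate_to_agent(ticket_text, reason="low confidence")
    return draft


if __name__ == "__main__":
    print(asyncio.run(handle_ticket("I can't log into my account.")))
```

The same pattern extends naturally to cost controls (a per-ticket token budget) and observability (logging every decision with its reason code).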
-
95% of Generative AI pilots are failing. That's not a typo; it's the wake-up call from MIT's latest report: https://lnkd.in/dUPfgSrD

The culprit? Not the technology itself, but a cocktail of unclear strategies, inflated expectations, and lack of real integration. CFOs and business leaders are discovering that AI isn't just a plug-and-play tool; it's a cultural and operational shift.

So what's the way forward?
• Start with laser-focused, measurable outcomes
• Engage the entire organization, not just IT
• Measure impact and adapt in real time

At Foreworth, we believe the winners won't be those who rush into AI pilots, but those who turn hype into measurable ROI.

The question is: will you be part of the 95%... or the 5% that actually delivers?
-
A recent MIT report found that 95% of generative AI pilots at large companies are failing. That's a staggering number, but it's also a massive opportunity.

Why? Because small to medium-sized businesses are the right size to get this right. SMBs are agile, nimble, and closer to their people and processes. They don't need a dozen committees to test a new tool. They can move fast, iterate quickly, and adopt AI in ways that are practical, role-specific, and immediately impactful.

At ONS Consulting Group, we built PACE Arc to help organizations of all sizes adopt AI with purpose and precision. But it's the SMBs who often have the clearest path to success, because they can actually do what others are still debating.

Let's stop chasing hype and start building value.

https://lnkd.in/gzgbHrvt

#PACEArc #AIAdoption #SMBLeadership #DigitalTransformation #AgileBusiness #AIForTeams #CopilotStrategy #AIExecution #ONSConsultingGroup #AIWithPurpose #AIOpportunity #SetThePACE
-
95% of enterprise AI projects fail. That's not a typo; MIT just published research confirming what we've all seen in the field.

At Uptima, we've guided multiple customers through complex AI implementations that didn't just survive; they delivered real outcomes. That's a staggering contrast to the industry average.

The difference?
• We don't chase shiny tools; we build fit-for-purpose ecosystems.
• We focus on orchestration, not one-off pilots.
• We bring both consulting rigor and accelerators to make AI stick.

We have validated and reaffirmed that our approach isn't just different; it's refreshing. And it works.

The takeaway: success in AI isn't about experimenting. It's about execution, and showing FAST value. And that's exactly why we love helping our customers lead in their approaches.

https://lnkd.in/eAYz-G_x
-
I've seen a lot of takes on the recent MIT report on AI, ranging from "AI is the new crypto" to "enterprises are hilariously bad at deploying AI." Overall I thought the MIT report was an excellent resource; my first instinct reading it was to share it with the team: *Great news for us!* Like many specialized AI players, reading about the importance of customization, deep integrations, and infusing AI into critical business workflows was music to my ears. And the data, such as the 95% failure rate of generic AI builds, lines up with what we've seen firsthand.

Here are a few highlights from the report for those who haven't read the full 26 pages yet:

- Generic vs. Specialized Tools: Chatbots grew fast early, but without memory or customization, they fail in critical workflows. Checking the box with "we have an AI chatbot" doesn't move the P&L needle.
- Buy vs. Build: Large enterprises and eager IT teams rushed to build internal copilots in the early days. Not only are internal build efforts significantly more likely to fail; more strikingly, employees are twice as likely to use external tools over internal ones.
- AI Governance: Success comes when functional leaders identify clear use cases and drive adoption, with CFO sponsorship once ROI is proven. The slowest rollouts happen when centralized AI strategy teams overanalyze while the market moves on.
- Front-Office vs. Back-Office: Front-office tools for Sales & Marketing get the majority of AI budgets, but adopting AI for back-office ops delivers the real, transformative savings.
- Enterprise vs. Mid-Market Speed: Enterprises are faster at running experiments, but mid-market companies are faster at achieving transformation. Top performers move from pilot to full deployment in ~90 days. *Our fastest mid-market launch was 10 days from first conversation to global rollout.* That's how fast you can move when all the stakeholders are aligned.

We are most bullish on the need for specialized AI tools that drive measurable P&L impact. The next chapter of growth will come from taking the right dependencies, implementing AI with a platform mindset, and aligning AI governance with functional leaders.

https://lnkd.in/gGpMPtsj
-
If 95% of generative AI pilots at companies are failing, what exactly are the 5% doing right? Check out the first of our blog series discussing the findings of the MIT study and how you can be part of that 5%. https://lnkd.in/e-QKJ_nx
-
"95% of internal generative AI pilots have no measurable impact on profit and loss." - MIT/Forbes

That stat may sting a little... but it's not shocking. Most companies are swinging AI at vague problems, hoping something sticks... hoping it solves everything... something. If you haven't been in one of these strategy meetings, I can tell you, it's maddening.

BUT! There's good news! The report also found that success rates are twice as high when companies buy from specialized vendors instead of trying to duct-tape their own models into workflows.

That's the whole point of Steerco! We don't do "AI for everything." We solve one very specific, very painful problem for Customer Success: preparing presentations, success plans, and account reviews without burning hundreds of hours.

AI works when it's pointed at something clear and specific. That's why our customers see impact fast: the problem is clear, and the solution is purpose-built.

https://lnkd.in/gvdfJTiQ
-
95% of Generative AI Pilots Are Failing, But the Real Story Lies Beyond the Hype...

AI isn't failing; companies are! If your LinkedIn feed is anything like mine, you've probably seen the headline: "MIT says 95% of enterprise generative AI pilots are delivering zero measurable return." But why are these projects fizzling out rather than fueling the next wave of innovation?

According to MIT's "The GenAI Divide: State of AI in Business 2025", the biggest roadblock isn't the AI itself; it's how companies try to force-fit it into legacy workflows and unrealistic expectations. Most organizations dash straight to flashy marketing or sales use cases while missing the sweet spot of back-office automation and thoughtfully targeted implementations. Only about 5% of pilots (the ones backed by narrow focus, smart partnerships, and line-manager-driven adoption) are actually producing rapid revenue growth.

Startups, especially those led by young founders, are breaking through. They're getting from zero to $20 million by starting small, staying agile, and building AI into workflows from the ground up.

The real question is: are enterprises ready to rethink AI as transformation, not decoration?

Reference: MIT Report: https://lnkd.in/eek-hUkj