Build vs Buy in AI: The Decision That Could Make or Break Your Digital Strategy
There is a peculiar ritual happening in boardrooms across the country. Someone spots a shiny new AI application at a conference, gets excited about its potential, and within weeks the procurement team is evaluating vendors for a six-figure purchase. Meanwhile, the business analyst who loves building stuff, the one who built the Excel model everyone relies on, could probably solve the same problem using the tools the company already owns... if anyone bothered to ask them.
We have developed an almost Pavlovian response to business challenges: see a problem, half-understand the round hole, spot an expensive square-peg solution, buy the expensive solution. But here's the question nobody is asking: for AI, should we build our own solution?
Most organisations skip straight to procurement without ever having the conversation.
This decision matters more than ever. Get it right, and you accelerate innovation whilst controlling costs. Get it wrong, and you could find yourself with an expensive white elephant or locked into a vendor relationship that stifles growth.
The Economics Are Trickier Than You Think
Most organisations start with cost as their primary consideration, but the arithmetic is trickier than it looks. Building in-house used to mean significant upfront investment: hiring scarce AI talent, purchasing infrastructure, and the opportunity cost of delayed deployment. You probably think a custom AI solution might take two years before delivering any business value. (Spoiler: You would be wrong).
Meanwhile, you probably feel confident that proven vendor solutions can be deployed in weeks or months. The development costs are spread across multiple clients, creating economies of scale that individual organisations simply cannot match.
But... total cost of ownership tells a different story. Licensing fees recur annually and can be substantial. A successful internal build has lower marginal costs over time: no licence fees, reusable components, and complete control over the roadmap. And below I'm going to argue it could well cost you less today.
The key to getting the build vs buy question right is to complete a rigorous cost-benefit analysis that includes all direct costs (development team, licences, cloud compute) and indirect costs (project delays, the opportunity cost of not deploying AI sooner). Factor in contingencies too: budget overruns on internal projects, or vendors charging extra for scaling up.
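To make that concrete, here is a back-of-the-envelope version in Python. The figures are pure placeholders, not benchmarks; the point is that the comparison only makes sense over the whole life of the solution.

```python
# A back-of-the-envelope whole-life cost comparison; a minimal sketch.
# All figures below are illustrative placeholders, not benchmarks.

def total_cost_of_ownership(upfront, annual_recurring, years, contingency=0.2):
    """Whole-life cost: upfront spend plus recurring costs, with a contingency buffer."""
    base = upfront + annual_recurring * years
    return base * (1 + contingency)

# 'Buy': low upfront, recurring licence fees that never stop.
buy = total_cost_of_ownership(upfront=50_000, annual_recurring=120_000, years=5)

# 'Build': higher upfront, lower marginal cost once live, bigger overrun buffer.
build = total_cost_of_ownership(upfront=250_000, annual_recurring=40_000,
                                years=5, contingency=0.35)

print(f"Buy over 5 years:   £{buy:,.0f}")
print(f"Build over 5 years: £{build:,.0f}")
```

With these placeholder numbers, buying wins over one year but building wins over five. Run the comparison over the whole life of the solution, not the first invoice.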
Be objective on risk: sure, you have a piece of paper that means you can sue or blame the vendor. But how often would you actually do it? And if you did, would you get any recompense?
And to do this properly and fairly, you need to know what the 'in-house/build' option actually is. It's probably neither what you assume nor what your CIO is telling you.
I like to be a bit of a 'Del Boy', so I also consider the upside: could this become more than just an internal tool? You are building IP that could become very valuable.
So... some considerations to bear in mind the next time you have to make this decision...
Customisation: My 80% Rule
Here's my finger-in-the-air approach: if a commercial solution meets about 80% of your requirements out of the box and can be configured for the rest, buying is likely more efficient.
You should be willing to adapt some internal processes to the tool's proven practices.
However, if your processes or data are highly unique, an in-house build might be justified. The middle path is increasingly common: use a general-purpose model as a base and customise it with your proprietary data. Many firms have now taken this approach, starting with general AI models but fine-tuning them with their own knowledge bases and rules.
This hybrid approach offers flexibility without building the core engine from scratch. The question becomes: does the vendor allow enough configuration, or will you need truly custom development?
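As a flavour of what that middle path can look like, here is a minimal sketch using the official `openai` Python SDK: a general-purpose model answering with your own knowledge supplied as context at query time. The model name and policy snippets are placeholders, and a real system would retrieve those snippets from a document store rather than hard-coding them.

```python
# A minimal sketch of grounding a general-purpose model in proprietary data.
# Assumes the official `openai` SDK; model name and snippets are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# In practice these snippets would come from a search over your document store.
proprietary_context = [
    "Policy 14.2: claims under £500 are auto-approved within 48 hours.",
    "Policy 9.1: flood cover excludes properties built before 1990 unless surveyed.",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; swap for whatever model you licence
    messages=[
        {"role": "system",
         "content": "Answer using ONLY the context below.\n\n" + "\n".join(proprietary_context)},
        {"role": "user", "content": "Is a £350 claim auto-approved?"},
    ],
)
print(response.choices[0].message.content)
```

The core engine is bought; the knowledge, rules, and behaviour are yours.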
Regulation
AI applications are coming under increasing regulatory scrutiny. Whether you build or buy, accountability for compliance rests with your organisation. The EU's AI Act represents the first comprehensive regulation on artificial intelligence, imposing strict requirements on "high-risk" AI systems.
If buying, due diligence on the vendor is essential. Can they contractually commit to regulatory requirements? Will they provide audit trails for AI-driven decisions? Many AI vendors are still startups, which brings specific risks that need evaluation: often they have had neither the time, the cash, nor the knowledge to think this through.
Equally, if building in-house, you need the internal capability to meet regulatory expectations. This includes involving compliance and risk functions early, documenting model development processes, and building explainability into the model where required.
A key compliance area is model transparency. If a regulator asks how your AI makes decisions, you must be able to provide an answer. With third-party products, this can be challenging if the model is a "black box" or just a thin skin over the ChatGPT API.
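One practical mitigation, whichever route you take, is a decision audit trail. Here is a minimal sketch of the idea in Python; the field names are illustrative, and a production version would write to tamper-evident storage rather than a local file.

```python
# A minimal sketch of an AI decision audit trail; field names are illustrative.
import json, hashlib
from datetime import datetime, timezone

def log_decision(inputs: dict, output: str, model_version: str, path="ai_audit_log.jsonl"):
    """Record inputs, output, and model version so 'how was this decided?' has an answer."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    # Hash the record contents so after-the-fact tampering is detectable.
    record["checksum"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision({"claim_value": 350, "policy": "14.2"}, "auto-approve", "claims-model-v1.3")
```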
Either way, you are going to need to upskill your teams on what AI is and what the risks are. My experience is you can train your people in the core concepts to a good level in a couple of days. Once you take away the fluff, and the maths, and the B*s, AI is conceptually quite simple.
Data
AI systems inevitably handle sensitive data. Using a third-party service often means sending data to external systems or cloud environments. Where is the data stored and processed? Who can access it? Is it encrypted in transit and at rest?
When you hand over your data to a vendor, you are also handing over control of how it's managed, categorised, and potentially used. You might find your proprietary customer insights being used to train models that benefit your competitors, or discover that extracting your data when the contract ends is more complex and expensive than you anticipated. You just don't know. Data portability sounds straightforward until you realise your information has been transformed, enriched, or integrated in ways that make clean extraction nearly impossible.
Then there's also the challenge of data governance and lineage. To make the most of AI, your in-house data needs to be improved. The good news: tools like Microsoft Purview help organisations tag, classify, and track their data assets, ensuring you know what data you have, where it lives, who's using it, and how sensitive it is.
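To be clear about what 'tag and classify' means in practice, here is a toy illustration in Python. This is emphatically not the Purview API, just the underlying idea of scanning records for sensitive patterns and labelling them; the patterns and labels are illustrative.

```python
# A toy illustration of data classification: scan text for sensitive patterns
# and tag it. Not the Purview API; patterns and labels are illustrative only.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def classify(record: str) -> list[str]:
    """Return the sensitivity tags that apply to a free-text record."""
    return [tag for tag, pattern in SENSITIVE_PATTERNS.items() if pattern.search(record)]

print(classify("Contact jane.doe@example.com re: NI QQ123456C"))
# -> ['email', 'uk_ni_number']
```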
Building in-house means data stays within your own systems, where your governance tools can continue to monitor and tag it appropriately. You maintain complete 'purview' (no plug or pun intended) over who accesses what, when, and for what purpose. Your data governance policies apply consistently, and you're not dependent on a vendor's interpretation of data handling requirements.
However, building doesn't automatically solve privacy concerns: you still need to implement strong security and ensure your development team follows privacy-by-design principles.
The good news is that cloud providers now offer enterprise-grade solutions that combine the best of both worlds. Services like Microsoft Azure OpenAI allow you to use AI models within your own cloud tenant, so sensitive data doesn't leave your controlled environment, and your data governance tools can continue to do their job.
Most people I speak to don't know this.
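For the curious, here is roughly what it looks like: a minimal sketch assuming the official `openai` SDK's AzureOpenAI client, where the endpoint, API version, and deployment name are placeholders for your own tenant's configuration.

```python
# A minimal sketch of calling a model inside your own Azure tenant, using the
# official `openai` SDK's AzureOpenAI client. All names below are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-tenant.openai.azure.com",  # your private endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # illustrative; use the version your tenant supports
)

response = client.chat.completions.create(
    model="your-gpt4o-deployment",  # your deployment name, not a public model
    messages=[{"role": "user", "content": "Summarise our data retention policy."}],
)
print(response.choices[0].message.content)
```

Sensitive data goes to your own tenant's endpoint, not a public service, so your governance tooling keeps doing its job.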
Maintenance
Once deployed, AI systems require monitoring. If you have fine-tuned an LLM, or built on more traditional machine learning models, they will need retraining as data patterns evolve, new regulatory rules must be incorporated, and any errors or drift in model accuracy need correction.
With an in-house build, you take on full responsibility for maintenance. This means having the internal capability to monitor model performance, recalibrate the AI over time, fix bugs, and adapt the system to changing business needs. But, here's the thing: the clever people who are genuinely good at this tend to be early in their careers or in academia. So, relatively cheap.
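What does monitoring for drift actually involve? Here is a minimal sketch using a two-sample Kolmogorov-Smirnov test from SciPy; the data and the threshold are illustrative.

```python
# A minimal drift check, assuming you keep a reference sample of the data the
# model was trained on. The synthetic data and threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_sample = rng.normal(loc=0.0, scale=1.0, size=5_000)  # what the model learned on
live_sample = rng.normal(loc=0.4, scale=1.1, size=1_000)      # what production now sees

stat, p_value = ks_2samp(training_sample, live_sample)
if p_value < 0.01:
    print(f"Drift detected (KS={stat:.3f}); schedule retraining.")
else:
    print("Distributions look consistent; no action needed.")
```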
By buying, you shift much of the maintenance burden to the vendor. Vendors focused on AI invest in keeping their products state-of-the-art and compliant. However, you'll still need internal effort for vendor management and solution upkeep, and you may get frustrated by the pace of change and by roadmap features that never arrive.
There is also an important risk I need to shout about... AI startups can pivot or go out of business. If a vendor discontinues a product, you could be left without updates or with a product that slowly becomes obsolete. If they hike their prices, you have no choice but to go along with it.
So think through the contractuals very carefully, and make sure they work for you if the worst happens.
A Framework for Decision-Making
To evaluate build-versus-buy choices effectively and fairly, I think you should assess six key areas:
Strategic alignment: Is this capability central to competitive advantage or is it a utility function? Could it become a product in its own right?
Requirements and fit: How well do market offerings match your needs? Can you adapt processes to fit available tools?
Cost-benefit and timing: What is the whole-life cost, including realistic contingencies? Which option delivers value quickest? Could you make money from the IP?
Integration and architecture: How does each option fit your target architecture? Which leads to a more modular, future-ready landscape?
Decision governance: Who is accountable? What are the service levels, performance reviews, and exit plans?
Risk and compliance: What specific risks are tied to each option? Can they be adequately mitigated?
The DIY Approach: Build, and Buy Carefully
The evidence I have seen increasingly points to a hybrid approach as the optimal path, and this is where it gets really interesting for organisations looking to democratise AI capabilities.
Picture this: your organisation purchases a mature AI model... say, a large language model from Microsoft Azure OpenAI or a computer vision platform from Google Cloud. These are the heavy-lifting engines, built by teams of PhD-level researchers with massive computational resources. You're not reinventing the wheel; you're buying the wheel from the best wheelmakers in the world.
But, instead of using these models as-is, you build lightweight, domain-specific applications on top of them using low-code platforms. Think Microsoft Power Platform, Salesforce Einstein, or even open-source tools like Streamlit.
I bet most of the apps you see and are wowed by aren't actually that complex in reality. So building them yourself isn't as hard as you think.
This low-code approach transforms AI from an elite technical capability into something your business analysts, operations managers, and subject matter experts can work with directly. Suddenly, the person who best understands your customer service challenges can build an AI-powered chatbot. The finance manager who knows your budgeting pain points can create an intelligent forecasting tool.
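To show how thin that application layer can be, here is a minimal Streamlit sketch; the answer_question function is a stand-in for whichever licensed model API sits underneath.

```python
# A minimal sketch of the low-code layer: a Streamlit front end a business
# analyst could own, sitting on top of a bought model.
import streamlit as st

def answer_question(question: str) -> str:
    """Stand-in for a call to your licensed model (Azure OpenAI, etc.)."""
    return f"(model answer for: {question!r})"

st.title("Policy Q&A")
question = st.text_input("Ask a question about our policies")

if question:
    st.write(answer_question(question))
```

Run it with `streamlit run app.py` and you have a working front end in under twenty lines.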
Real-world example: a large insurer I previously worked with bought access to a natural language processing model but used Power Apps to create a custom interface where staff could ask questions about policies in plain English. The AI model handled the language understanding, but the business logic, user interface, and integration with their existing application stack were built by their operations team (guided by yours truly) using drag-and-drop tools.
To me, this hybrid model offers several compelling advantages:
Speed to value: Instead of waiting months for IT to scope, develop, and deploy a custom solution (and get it wrong, prompting lots of back and forth), business users can prototype and iterate in days or weeks. The AI model provides the intelligence; the low-code platform provides the delivery mechanism.
Business ownership: The people who understand the problem best are also the people who build the solution. This eliminates the traditional game of telephone between business requirements and technical implementation.
Cost efficiency: You're not paying enterprise software prices for standard functionality, nor are you paying consultant rates for custom development. The expensive bit (the AI model) is shared across multiple use cases, whilst the cheap bit (the application layer) is tailored to your specific needs.
Continuous improvement: Business users can modify and improve their AI applications as they learn what works. No change requests, no development sprints, no budget approvals for minor tweaks.
Risk distribution: If it does not work out, you haven't blown your entire budget. You can experiment with multiple approaches and double down on what succeeds.
IP ownership: Every solution you build adds to your intellectual property portfolio. The AI application you create to solve your own problem might solve the same problem for hundreds of other organisations. I've seen companies turn internal AI tools into seven-figure revenue streams by productising what they built for themselves.
The citizen developer movement is perfectly positioned to take advantage of this approach. These are your Excel power users, your SharePoint site administrators, your people who already solve business problems with whatever tools they can get their hands on. Give them access to AI capabilities through familiar interfaces, and they'll create solutions you never imagined.
After all, they know your business and your customers' needs inside out.
But, and this is crucial, this is NOT about replacing your IT team or abandoning governance. The hybrid approach actually strengthens both. IT focuses on the infrastructure, security, and integration backbone that makes everything possible. Meanwhile, they provide guardrails and templates that ensure citizen developers can innovate safely within approved boundaries.
Consider how this plays out in practice: each solution leverages the same underlying AI capabilities, but delivers value in ways that no generic, off-the-shelf product could match.
The key is establishing the right foundation: robust, scalable AI services that can be consumed through simple APIs, combined with low-code platforms that make those APIs accessible to non-technical users. Add proper governance frameworks to ensure security and compliance, and you have a recipe for sustainable AI innovation at scale.
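As an illustration of that foundation, here is a minimal sketch of an internal gateway using FastAPI: IT exposes one governed endpoint, and citizen developers build against that instead of holding raw model keys. All names are illustrative.

```python
# A minimal sketch of the 'simple API' foundation: one internal, governed
# endpoint wrapping the approved model. Names and fields are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Internal AI Gateway")

class Ask(BaseModel):
    question: str
    team: str  # lets you attribute usage and apply per-team guardrails

@app.post("/ask")
def ask(req: Ask) -> dict:
    # In the real service this is where governance lives: logging, rate limits,
    # content filters, then the call to the approved model deployment.
    answer = f"(model answer for {req.team}: {req.question!r})"
    return {"answer": answer}
```

Serve it with uvicorn and every team hits the same governed endpoint, while IT keeps one place to enforce policy.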
The Bottom Line: Think MVP, Not Monolith
Neither building nor buying is inherently superior. The optimum path depends on disciplined assessment of strategic fit, whole-life cost, compliance obligations, and your organisation's ability to manage risk over the full lifecycle of the solution.
But my learning has been that the biggest failures come from thinking too big, too soon.
Consider the traditional enterprise software playbook. You spend months defining requirements, evaluating vendors, and negotiating contracts. You end up with a comprehensive platform that costs hundreds of thousands, sometimes millions, and comes loaded with features you'll never use. Worse, you find yourself adapting your business processes to fit the software rather than the other way around. Requirements that seemed crystal clear during procurement turn out to be slightly off-target when you start using the system in anger.
The low-code hybrid approach flips this on its head. Instead of betting big on a single platform, you can build safe, usable prototypes quickly.
Start with a minimum viable product that solves one specific problem for one team.
Learn what works.
Iterate. Expand.
Example: rather than buying a £200,000 AI-powered customer service platform that promises to handle every conceivable scenario, build a simple chatbot using Power Virtual Agents and Azure that answers the ten most common questions your support team gets. Deploy it to a subset of customers or, as I saw with another insurer, deploy it to your customer service agents to make their jobs easier. Measure the impact. Refine the responses. Add more scenarios gradually.
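To show how small that MVP can really be, here is a sketch of the matching logic in nothing but the Python standard library; the questions, answers, and cutoff are placeholders.

```python
# A minimal sketch of the MVP chatbot idea: match an incoming question against
# your most common support questions. Q&A content and cutoff are placeholders.
import difflib

FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "where is my policy document": "Policy documents are under My Account > Documents.",
    "how do i make a claim": "Start a claim at example.com/claims or call 0800 000 000.",
}

def answer(question: str) -> str:
    match = difflib.get_close_matches(question.lower(), FAQ.keys(), n=1, cutoff=0.5)
    return FAQ[match[0]] if match else "Sorry, let me pass you to a human agent."

print(answer("How do I reset my password?"))
```

The production version needs proper intent matching, but the shape of the solution is exactly this simple.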
This MVP approach offers several crucial advantages:
Lower risk: If your first attempt doesn't work, you have only lost weeks of effort, not years of budget.
Faster learning: You discover what your users actually need, not what they said they needed six months ago.
Business alignment: You are building tools that fit your processes, not reshaping your organisation to fit someone else's software.
User adoption: People are more likely to embrace something they helped create and can see evolving based on their feedback.
Cost control: You scale investment with proven value rather than hoped-for benefits.
IP creation: Every successful prototype becomes intellectual property you own completely. That simple chatbot you built to answer customer queries? If it works well, there might be dozens of other companies willing to pay for the same solution.
The most successful organisations, IMHO, are those embracing this hybrid, iterative model: buying the sophisticated AI capabilities they could never build themselves, then empowering their people to create lightweight applications that solve real problems, one at a time. Some of those applications stay internal. Others become the foundation for new business lines.
The companies that succeed in AI aren't necessarily those with the biggest budgets or the most sophisticated technology. They're the ones that start small, learn fast, and scale thoughtfully. They put AI capabilities into the hands of the people who understand their business problems best, and they give those people the freedom to experiment without betting the company on every idea.
The question isn't just whether you should build or buy AI. It's whether you're ready to embrace an approach that prioritises learning over planning, prototypes over procurement, and business value over feature completeness. In short, whether you are willing to invest in your culture and believe in your teams.
If you do, your people, customers, and pocket will thank you for it.