AI in Procurement: Pressing Challenges for Today’s Procurement Leaders

Introduction

Artificial intelligence (AI) is rapidly redefining the procurement function. Surveys show that 64% of procurement leaders expect AI to transform their roles within five years, and a vast majority are investing in or piloting AI solutions. Yet despite this enthusiasm, AI adoption in procurement remains cautious and uneven. In a 2024 global survey of chief procurement officers (CPOs), 92% were exploring AI’s possibilities, but only 37% had moved to piloting or deployment, with many still evaluating the risk–reward balance. The reality is that scaling AI in procurement “will not be without its challenges,” as one study notes – the biggest perceived roadblocks include data quality, data privacy and regulatory matters, supplier volatility, and the complexity of existing technology and processes. In addition, procurement leaders grapple with ethical implications, a shortage of AI-savvy talent, and integration hurdles as they seek to embed AI into legacy systems and workflows. These challenges are pressing and multifaceted. The following sections explore each issue in turn – from concerns over poor data and AI risks to ethics, contract analytics, talent gaps, integration woes, and regulatory compliance – illustrating why they demand the close attention of procurement directors today.

Data Quality: The Foundation (and Achilles’ Heel) of AI

AI systems are only as good as the data they are trained on, and procurement data is notoriously messy. Surveyed procurement leaders have cited data quality as one of the biggest obstacles to AI success and a major internal barrier to adoption. This is unsurprising – procurement data is often fragmented, inconsistent, or incomplete, spread across multiple ERP systems and spreadsheets, which can cripple the performance of AI algorithms. Common data quality problems include inconsistent data formats between systems, inaccurate or missing historical information, and lack of a standard taxonomy for classifying spend and suppliers. If such issues remain unaddressed, they lead to duplicate or conflicting records and “poor-quality data will likely lead to flawed recommendations and suboptimal decision-making” by even the most advanced AI tools. For example, an AI-driven risk model meant to predict supply disruptions is only as reliable as the supplier, inventory and spend data fed into it.

The need for accurate, cleansed data has therefore become a foundational challenge. Procurement generates vast amounts of information – from master data on suppliers and materials to transactional data like purchase orders, invoices and contracts. Keeping all of this clean and consistent is a persistent struggle. Disparate systems and manual data entry contribute to errors like missing fields, inconsistent naming, or duplicate supplier entries. In short, “garbage in” leads to “garbage out” with AI. Procurement teams recognise that without trustworthy data, even the most powerful analytics or machine learning models cannot deliver useful insights. This challenge isn’t new, but it has taken on urgency as organisations look to more advanced AI such as predictive analytics and generative models. Building a solid data foundation – through data standardisation, cleansing and governance – is now seen as critical for AI readiness. Until then, data quality remains a key stumbling block in unlocking AI’s value in procurement.
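To make the data-cleansing challenge concrete, here is a minimal sketch of the kind of duplicate-supplier check a master-data clean-up might start with. The supplier names, IDs and suffix list are purely illustrative, and real programmes would use far more sophisticated matching, but the principle is the same: normalise inconsistent naming before any AI model aggregates spend by supplier.

```python
import re
from collections import defaultdict

def normalise_name(name: str) -> str:
    """Lowercase, strip punctuation and common legal suffixes so that
    'ACME Ltd.' and 'Acme Limited' compare as the same supplier."""
    name = re.sub(r"[^\w\s]", "", name.lower())
    suffixes = {"ltd", "limited", "inc", "llc", "gmbh", "plc", "co"}
    tokens = [t for t in name.split() if t not in suffixes]
    return " ".join(tokens)

def find_duplicate_suppliers(records):
    """Group supplier records whose normalised names collide --
    likely duplicates that would distort spend aggregation."""
    groups = defaultdict(list)
    for rec in records:
        groups[normalise_name(rec["name"])].append(rec["id"])
    return {k: v for k, v in groups.items() if len(v) > 1}

# Illustrative records; real data would come from the ERP's vendor master.
suppliers = [
    {"id": "S001", "name": "ACME Ltd."},
    {"id": "S002", "name": "Acme Limited"},
    {"id": "S003", "name": "Globex GmbH"},
]
print(find_duplicate_suppliers(suppliers))  # {'acme': ['S001', 'S002']}
```

Even a crude normalisation pass like this surfaces the duplicate entries that would otherwise split one supplier's spend across two records, which is exactly the "garbage in" that undermines downstream analytics.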

AI-Related Risks and Security Concerns

Introducing AI into procurement processes brings not only opportunities but also new risks that leaders must carefully manage. One major concern is the accuracy and reliability of AI-driven outputs. AI isn’t a magical “easy button” that always gets things right – algorithms can produce errors, biased results or even entirely fictitious answers (so-called AI hallucinations). Procurement professionals are learning to “trust, but verify” any AI-generated insights rather than take them as gospel. There is a clear need for human oversight to validate AI outputs, because a flawed recommendation (say, mis-scoring a supplier’s risk level or misclassifying a spend) could lead to poor decisions. The risk of overconfidence in AI is real – if users blindly trust automated recommendations, errors might go unchecked. Procurement leaders have noted that information security is perhaps the biggest concern with AI adoption, and they emphasise caution to ensure sensitive data or erroneous outputs do not “make it go the wrong way.” In practice, this means keeping humans in the loop to double-check AI analyses and intervene when something looks off.

Security is another critical risk dimension. Data privacy and cybersecurity threats loom large when deploying AI in procurement. AI systems often require large datasets (including supplier info, contracts, pricing, and possibly personal data) to train or operate, raising worries about how that data is stored and used. For example, the use of third-party AI platforms can trigger fears about exposing confidential information or intellectual property. One procurement executive noted that as a public institution holding sensitive research and student data, they must be “extremely careful” that using AI does not risk exposing that information. Many organisations are therefore restricting what data can be fed into AI tools – central IT teams may forbid using open public AI services for procurement data, instead insisting on vetted, secure enterprise AI platforms. The spectre of cyberattacks also grows with AI. If AI systems interface with financial or supplier data, they become new targets for hackers. Indeed, AI can be a double-edged sword – while procurement teams use AI to detect fraud, criminals can likewise use AI to perpetrate fraud. A sobering real-world example occurred in 2022: cybercriminals compromised a supplier’s email, then used an AI (ChatGPT) to generate very convincing “urgent payment update” messages in the supplier’s native language, tricking an automotive manufacturer into transferring €320,000 to a fraudulent account. Such cases demonstrate how AI-enabled scams and deepfakes can exploit procurement processes, from fake invoices to synthetic voices authorising bogus transactions. The challenge for procurement leaders is to anticipate these new attack vectors and bolster their defences accordingly.

In summary, the rush to adopt AI comes with a mandate to manage risk. This means instituting controls for AI output quality, maintaining robust cybersecurity and data protection practices, and cultivating a healthy scepticism among staff towards AI recommendations. Procurement teams must ensure AI augments – and does not override – human judgment. By addressing these risks head-on (e.g. via pilot testing, output validation, and security protocols), leaders aim to prevent AI from becoming a single point of failure or a source of new vulnerabilities in the procurement process.

Ethical and Responsible AI Considerations

Beyond technical risks, ethical challenges associated with AI are top of mind for procurement leaders. As AI takes on roles in supplier selection, bid evaluations, and spend analysis, questions arise about fairness, transparency, and accountability. AI can unintentionally amplify biases present in historical data or human decisions. For instance, what if an AI-powered sourcing tool consistently rejects suppliers from certain regions or small minority-owned businesses because the training data favours established vendors? This scenario is not just hypothetical – it’s happening today, as one procurement study noted. AI systems trained on past procurement data may replicate entrenched biases, preferring familiar suppliers or those from specific countries, thereby undermining diversity in the supply base. Such algorithmic bias can lead to inequitable outcomes and reputational damage if left unchecked. Bias can creep in through skewed datasets, biased model design (e.g. optimising for cost alone and overlooking sustainability or diversity), or self-reinforcing feedback loops (overlooked suppliers stay overlooked and lack performance data, perpetuating the cycle). Procurement leaders thus face an ethical imperative to ensure AI decisions remain fair and inclusive.

Another challenge is the “black box” nature of many AI tools. Advanced AI algorithms (like deep learning models) often operate opaquely – they might flag a supplier as high-risk or recommend a purchase decision without a clear explanation of why. This lack of transparency erodes trust and contravenes the need for accountability. In procurement, such opacity is problematic: both internal stakeholders and suppliers may rightly ask, “On what basis did the AI make this decision?” If a supplier is rejected by an AI with no explanation, it’s impossible for them to improve or for procurement teams to defend the decision. Transparency (or explainability) is therefore a key ethical demand. However, many organisations have been slow to implement AI explainability tools – some fear revealing too much about their decision models or find existing techniques cumbersome. The result is a tension: procurement teams might rely on AI recommendations without fully grasping their rationale, potentially reinforcing hidden biases or errors in the process.

Accountability and governance also weigh heavily. If an AI-driven procurement decision leads to an unfair outcome or a compliance breach, who is responsible – the tool, the procurement officer, or the vendor who provided the AI system? Many leaders insist that humans remain accountable, but this requires putting proper oversight in place. CPOs like Paul Williams of the University of California emphasise that “privacy, accountability and bias mitigation are top of mind” when trialling AI, precisely to set the right guardrails before problems occur. Ethical procurement AI also means safeguarding data privacy. AI thrives on data, but using partner and supplier data for AI analysis raises consent and privacy issues. As an electronics industry report noted, the use of AI in supply networks raises concerns about the retention and use of data shared by partners. Companies must be sure that feeding supplier data into an AI tool doesn’t violate confidentiality agreements or data protection laws – an issue that blends ethics with compliance (discussed further below).

In essence, procurement leaders are challenged to balance AI’s efficiency gains with the ethical principles of their profession. They must strive for AI that is responsible – algorithms that are free of unfair bias, decisions that are explainable and transparent, use of data that respects privacy, and outcomes that maintain trust among all stakeholders. Addressing these dilemmas is not easy, but ignoring them could “undermine trust and fairness in supply chains”. As a result, many organisations are establishing AI ethics guidelines, conducting algorithm audits, and involving diverse stakeholders in AI design to ensure the technology aligns with corporate values and procurement’s commitment to fair dealing.

The Promise and Pitfalls of Contract Intelligence

One of the most promising applications of AI in procurement is contract intelligence – using AI to analyse contracts and extract valuable insights. Procurement and legal teams spend countless hours drafting, reviewing, and managing contracts. AI offers to streamline this by automatically reading contract documents, flagging risks, and tracking key terms or obligations. In practice, contract analytics AI can rapidly extract clauses, identify non-standard terms, assess compliance, and even score contractual risks across a large portfolio of contracts. For example, the University of California has been testing an AI tool to review its standard terms and conditions and compare any deviations: if a counterparty alters a clause (say on indemnification or security requirements), the AI flags it and assigns a risk score so negotiators can quickly focus on high-risk changes. AI can also cross-compare legacy agreements – UC’s CPO noted that by feeding multiple contracts with the same supplier into AI, they could finally see which agreement is most advantageous on certain parameters. This kind of intelligent contract review can unlock value by catching hidden risks and ensuring better consistency across contracts.
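The clause-deviation review described above can be illustrated with a deliberately simple sketch. The standard clauses and similarity threshold below are hypothetical, and production contract-AI tools use trained language models rather than raw string similarity, but the workflow is the same: compare each clause against the organisation's standard wording and flag low-similarity clauses for a human negotiator.

```python
import difflib

# Hypothetical standard clauses; real templates would come from the
# organisation's contract playbook.
STANDARD_CLAUSES = {
    "indemnification": "Supplier shall indemnify Buyer against all third-party claims.",
    "data_security": "Supplier shall encrypt all Buyer data at rest and in transit.",
}

def flag_deviations(contract_clauses, threshold=0.85):
    """Compare each contract clause with the standard wording and flag
    those whose similarity falls below the threshold for human review."""
    flagged = []
    for key, text in contract_clauses.items():
        standard = STANDARD_CLAUSES.get(key)
        if standard is None:
            continue  # no template to compare against; route to full review
        score = difflib.SequenceMatcher(None, standard.lower(), text.lower()).ratio()
        if score < threshold:
            flagged.append((key, round(score, 2)))
    return flagged
```

Running this over a contract whose indemnification clause has been capped at the contract value (while the data-security clause matches the template) flags only the altered clause, so reviewers can focus their time where the counterparty actually pushed back.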

However, along with this potential come significant challenges. Contracts are often long, complex and written in legal language that can be nuanced. Feeding unstructured contract text into AI may not always yield reliable results – subtle differences in wording can completely change the meaning of a clause, and an AI might miss context that a human lawyer or buyer would catch. Thus, accuracy is a major concern. Even the best contract AI tools can occasionally misclassify a clause or overlook an important exception buried in text. This is why human oversight remains essential. Industry experts warn of a “contract AI paradox”: while automation can cut costs and cycle times by up to 60% in contracting, there is a risk of eroding critical human expertise if people become overly reliant on AI. Organisations must maintain skilled reviewers to verify AI findings, otherwise errors or biases in the AI’s training could slip through and lead to legal or financial trouble. In short, a contract AI might speed up first-pass review, but procurement and legal still need to double-check the machine’s work.

Data quality and security pose further hurdles in contract intelligence. Contracts contain highly sensitive information (pricing, IP ownership, liability terms, etc.), so any AI system handling them must have robust security and data governance. According to one CLM (contract lifecycle management) provider, “for AI and contracts, the main challenges are data quality and security, required human oversight, and AI ethics”. Many companies have thousands of legacy contracts in disparate formats (PDF scans, Word documents, etc.), and prepping this data for AI analysis (digitising, cleaning, organising) is a project in itself. Moreover, AI’s “black box” problem surfaces in contract analysis as well – if an AI negotiation agent suggests certain contract language, the team needs to understand the rationale. Lack of transparency can make it hard to trust an AI’s recommendations during negotiations. Teams might find it challenging to pinpoint why the AI flagged certain clauses as risky or whether it might be missing context. This opacity is part of the “black box dilemma” noted in contract AI usage, and it can hide biases or errors in how the AI evaluates terms.

In response, some organisations are proceeding cautiously – for instance, the UC procurement team is developing their AI tools internally in part to maintain more control and ensure the solutions fit their unique contract landscape. The bottom line is that contract intelligence is a high-impact area for AI, but realising its value requires overcoming data preparation issues, maintaining strict security/privacy controls, and keeping expert humans in the loop. Procurement leaders must ensure that AI becomes a helpful assistant for contract work rather than an unchecked robot lawyer. When done right, AI contract analysis can significantly reduce manual drudgery and catch issues humans might miss, but until the challenges above are addressed, it remains a tool that augments – not replaces – human judgment in contracting.

Talent Gaps and the Evolving Skill Set

AI in procurement doesn’t just require new technology – it demands new skills and talent that many procurement teams currently lack. This talent gap is a pressing concern for leaders responsible for implementing AI projects. Traditional procurement expertise (negotiation, stakeholder management, category knowledge) must now be complemented by data science, analytics, and technical skills to fully leverage AI. In practice, organisations are finding that they need people who can build or at least understand AI models, manage large datasets, and translate business questions into AI solutions. Such hybrid skill sets are in short supply. In fact, a recent survey indicated that as many as 85% of large companies feel there is a shortage of qualified AI talent in the market, and procurement is no exception. Procurement leaders specifically worry about “the shortage of AI talent in data science and software engineering, and poor training in new skills across the broader workforce.” In other words, there are not enough people who know how to implement AI or work alongside it effectively, and current staff often haven’t been trained in these emerging competencies.

Bridging this gap is made harder by the rapid pace of change. The profile of an ideal procurement professional is evolving: alongside commercial acumen, organisations now seek talent proficient in data analytics, comfortable with AI tools, and capable of critical thinking to interpret AI outputs. In the Hackett Group’s 2025 Key Issues Study, the “changing profile of procurement skills” was identified as a top trend by 56% of procurement leaders, right behind digitalisation and AI itself. Clearly, CPOs recognise that upskilling the team is just as important as installing new technology. They are starting to invest in training programs to build data literacy and AI fluency among existing staff, while also recruiting new specialists (data analysts, data engineers, AI solution architects) into procurement. For example, some organisations have hired data scientists within the procurement function for the first time in order to develop AI models for spend analytics or risk forecasting. This is a significant shift for a field that historically did not compete in the tech talent market.

Nonetheless, closing the talent gap is a long-term challenge. It involves overcoming internal resistance and convincing high-tech candidates that procurement is an exciting career path (one procurement leader noted the need to better communicate that procurement now involves cutting-edge analytics and can attract “high performers”). It also requires budget: hiring or contracting data science expertise can be costly, and training existing employees takes time. Many procurement teams are already stretched thin – a situation compounded by rising workloads and flat headcounts in recent years. Thus, leaders face a dual pressure: deliver quick wins with AI to justify the investment, while simultaneously investing in people so they can scale those wins. The talent challenge ultimately underpins all others: without the right skills in place, issues like data quality, integration, and ethical governance of AI become much harder to address. Forward-thinking CPOs are therefore treating talent development as a strategic priority, knowing that AI adoption will stall unless their teams have the capabilities to use and oversee these new tools. In sum, nurturing the next generation of “procurement technologists” – professionals fluent in both procurement and AI – is an essential, if difficult, task on the journey to an AI-enabled procurement function.

Integration Complexities with Legacy Systems

Few procurement organisations have the luxury of building their technology stack from scratch. Most are dealing with a patchwork of legacy systems, multiple ERPs, and assorted procurement tools accumulated over years. Integrating new AI solutions into this complex landscape is a formidable challenge. Procurement leaders report that existing systems often lack modern APIs or interfaces for AI tools to plug into. For instance, a company might have one ERP for finance, another for procurement, and several niche tools (for sourcing, contract management, supplier information) – all siloed with inconsistent data structures. Introducing an AI platform that needs to draw data from all these sources, or embed into workflows, becomes a major IT project in its own right. As one industry source noted, “integrating AI solutions with existing procurement and ERP systems can be complex and costly” – older systems may simply not be compatible without significant upgrades or replacements. This technological debt can slow down AI adoption considerably.

A related issue is data siloing. When data is spread across unintegrated systems, it’s difficult for AI to get a comprehensive view of procurement activities. For example, spend data might reside in an analytics tool, supplier data in a separate database, and risk indicators in yet another system. Without integration, an AI tasked with, say, identifying strategic sourcing opportunities might miss pieces of the puzzle. Leaders acknowledge that if you have siloed procurement systems now, “the challenge is only going to get worse with AI” – AI systems themselves cannot magically bridge these silos without deliberate data integration efforts. In fact, AI often requires secure, consolidated access to data in order to be effective. Setting that up can mean investing in data warehouses or middleware to connect systems before an AI solution can even be deployed. This is why complexity of existing technology is frequently cited as a barrier; many procurement functions first need to modernise their IT infrastructure (or work with IT departments) as a precursor to meaningful AI implementation.
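The data-consolidation step described above can be sketched in miniature. The supplier IDs, spend figures and risk labels below are invented for illustration; in practice this join happens in a data warehouse or middleware layer, but the design point is the same: merge the silos on a shared key and make gaps explicit, so a downstream model sees that risk data is missing rather than silently dropping those suppliers.

```python
# Hypothetical extracts from two siloed systems; field names are illustrative.
spend_by_supplier = {"S001": 1_250_000, "S002": 430_000, "S003": 85_000}
risk_scores = {"S001": "low", "S003": "high"}  # the risk system lags behind

def consolidate(spend, risk):
    """Join the two silos on supplier ID, keeping every supplier with
    recorded spend and marking absent risk data as 'unknown'."""
    return {
        sid: {"annual_spend": amount, "risk": risk.get(sid, "unknown")}
        for sid, amount in spend.items()
    }

view = consolidate(spend_by_supplier, risk_scores)
# S002 has spend but no risk score -- the gap is now visible, not hidden.
```

Surfacing "unknown" explicitly is a deliberate choice: it lets the AI (and its human overseers) treat missing data as a finding in itself, rather than producing confident-looking outputs from an incomplete picture.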

Integration challenges also extend to process and people. Deploying an AI assistant in procurement means altering workflows: for example, plugging an AI recommendation engine into the purchase approval process, or using a chatbot interface for employees’ procurement requests. These must be carefully woven into legacy processes to avoid disruption. Furthermore, any AI tool chosen has to play nicely with enterprise security and compliance requirements. Often procurement teams pilot a new AI app only to hit a wall when corporate IT or security reviews its integration and data handling methods. All of this can elongate project timelines. It’s telling that many early AI use cases in procurement have been in relatively self-contained areas (like spend analytics or contract review), which can be run as standalone pilots. Scaling beyond pilots to full integration with transactional procurement systems is a much tougher step.

In summary, integration is a non-trivial hurdle on the road to AI-enabled procurement. Legacy systems and siloed data can significantly delay ROI on AI initiatives. Procurement directors must often collaborate closely with CIOs and IT architects to navigate this, potentially upgrading core systems or using middleware solutions to connect old and new. The cost and effort involved can be high, which is why integration complexity appears on almost every list of AI adoption challenges in this domain. Overcoming it will be essential for AI to deliver its promised benefits at scale, rather than remaining stuck in pilot purgatory.

Regulatory and Compliance Challenges

Procurement operates within the bounds of various regulations and compliance standards, and integrating AI into the function raises important questions on this front. Traditional procurement laws (especially in the public sector) mandate fairness, transparency, and accountability in sourcing decisions – principles that could be tested by opaque AI algorithms. Moreover, the regulatory environment around AI itself is evolving rapidly, which adds uncertainty for organisations looking to deploy AI tools. Procurement leaders have identified regulatory compliance as a significant challenge in AI adoption. They must ensure that any AI system or recommendation aligns with existing rules (such as procurement policies, audit requirements, and industry-specific regulations) and can adapt to new legal requirements regarding AI. For example, in public procurement, decisions typically need clear documentation and justification. If an AI cannot explain why it chose Supplier A over Supplier B, it may not meet the transparency standards required by law. The “black box” issue thus isn’t just ethical – it can become a legal hurdle if challenged in a dispute or audit.

Data privacy is another compliance dimension. Procurement often involves personal or sensitive data (supplier financials, contracts containing personal information, etc.), which is governed by laws like GDPR or POPIA. Using AI might entail processing this data in new ways, potentially triggering privacy assessments or restrictions. Indeed, data privacy concerns are ranked alongside regulatory issues as top roadblocks by procurement executives. Organisations must verify that their AI vendors and systems handle data in accordance with privacy laws – for instance, ensuring that personal data used in model training is anonymised, or that data isn’t unlawfully transferred across borders. Non-compliance could lead to hefty fines or legal challenges.

Adding to these existing rules, new AI-specific regulations are on the horizon. Notably, the European Union has been finalising its AI Act, the world’s first comprehensive framework for AI governance. The EU AI Act will impose strict requirements on “high-risk” AI systems (potentially including certain procurement applications like supplier risk scoring or contract analytics), mandating measures like transparency, human oversight, and risk assessments. The Act also carries severe penalties for non-compliance – fines up to 7% of global annual revenue or €35 million, whichever is higher. Such provisions have global impact; even companies outside the EU may need to comply if they use AI in dealings that touch the EU market. For procurement leaders, this means proactively managing AI deployments to meet these standards. As one commentary put it, procurement teams “must proactively manage their AI technology providers and ensure all AI deployments meet the new standards” of regulations like the EU AI Act. Ignorance is not an option: if an AI tool used in procurement is later found to violate regulations (for example, using prohibited types of AI or failing to log decisions properly), the organisation could face legal and financial consequences.

In summary, the regulatory landscape is a crucial consideration when adopting AI in procurement. Leaders must navigate a complex web of procurement laws, data protection regulations, and emerging AI governance rules. This can slow down AI projects – compliance reviews and legal consultations become necessary steps before deployment. It can also constrain what AI is allowed to do (e.g. an AI might technically be able to scrape competitive pricing information, but doing so could breach anti-collusion laws or supplier agreements). Many organisations are responding by establishing internal AI governance committees to oversee compliance and ethical use. The challenge is to strike the right balance: leveraging AI’s benefits while staying firmly within the boundaries of law and regulation. Procurement directors, especially in highly regulated sectors or public entities, must be vigilant that their exciting new AI tools don’t inadvertently put the organisation at regulatory risk. In the era of AI, compliance is not just the legal team’s job – it has become a front-and-centre concern for procurement innovators as well.

Conclusion

AI holds tremendous promise for procurement – from automating routine tasks to uncovering strategic insights – but the journey is anything but simple. Directors of procurement find themselves at the nexus of technology and business, trying to harness AI’s potential while grappling with a wide spectrum of challenges. Data quality issues demand painstaking groundwork before AI can even take off. New risks and security threats must be managed to protect the organisation’s interests. Ethical dilemmas around bias, transparency and trust call for careful governance of AI systems. Even high-value use cases like contract intelligence come with hurdles of accuracy and oversight. At the same time, procurement leaders face internal challenges in the form of talent shortages and change management, as well as external constraints from legacy systems and regulatory demands.

What emerges is a picture of AI in procurement as a double-edged sword – powerful, transformative, yet fraught with complexities. Leaders who understand and openly address these challenges will be better positioned to realise AI’s benefits in a responsible, sustainable way. Those who rush in without laying the proper groundwork risk project failures or unintended consequences. As of 2025, the state of AI in procurement is one of cautious optimism: most CPOs are investing in AI and expect it to become integral, but they are moving deliberately, aware of the pitfalls. The coming years will be a defining period where procurement functions build the necessary data foundations, skills, and governance frameworks to truly integrate AI into their operations. By acknowledging the pressing challenges – from data and ethics to integration and compliance – procurement directors can steer their organisations through the hype to genuine value. In doing so, they will transform procurement not just with technology for its own sake, but with a keen eye on maintaining integrity, managing risk, and upholding the principles that procurement as a discipline has long stood for.

Sources:

The insights and examples in this article are supported by contemporary research and industry analyses, including Deloitte’s 2024 Global CPO Survey, the Hackett Group’s 2025 Key Issues Study, thought leadership from Art of Procurement on AI adoption barriers, expert commentary on AI ethics in procurement, case studies reported by EPS News (University of California’s AI trials), and procurement industry publications on data quality, cybersecurity, and talent gaps, among others. These sources reflect the collective experience of procurement leaders as they navigate the exciting yet challenging intersection of AI and procurement.
