Exclusive: What’s Changing in AI Procurement? Governance Is Redefining How CIOs Justify Vendor Choices


Artificial intelligence is moving from innovation budgets into enterprise infrastructure. But as companies scale AI across operations, cybersecurity, compliance, customer service and decision-making, a new question is reshaping corporate technology procurement: can vendors prove their AI is trustworthy?

The shift is no longer theoretical. In 2025, drawing on six years of research, McKinsey found that companies are intensifying efforts to mitigate the risks associated with generative models, including inaccuracy, cybersecurity threats and intellectual property infringement. Organizations that are more advanced in AI adoption are also more likely to have established processes for human validation of model outputs.

More recent findings show that the share of respondents reporting mitigation efforts for risks such as personal privacy, explainability, organizational reputation and regulatory compliance has increased since the previous survey. Respondents said their organizations are now taking action to manage an average of four AI-related risks, up from two in earlier findings.

At the same time, 51% of respondents from organizations using artificial intelligence technologies report that their companies have experienced at least one negative consequence.

In parallel, IBM’s 2025 Cost of a Data Breach Report highlights the growing governance gap around AI.

The study found that 63% of organizations lacked artificial intelligence governance policies to manage AI systems or prevent the proliferation of shadow AI, while only 37% had formal approval or oversight mechanisms in place.

Among organizations that reported a machine learning–related security incident, 97% did not have adequate access controls in place. IBM also estimated the global average cost of a data breach at $4.4 million. At the same time, organizations that extensively deployed AI in security achieved average cost savings of $1.9 million compared with those that did not.

Now, CIOs are responding by bringing AI ethics directly into vendor selection.

A New Era in AI Procurement

According to Thomas Prommer, a technology executive and AI expert with more than 30 years of experience who has led engineering organizations of over 1,000 people at companies including Adidas, Sweetgreen and Huge, several large organizations are preparing a new model for engaging with their suppliers, anchored in more robust governance strategies: stricter data traceability, supplier diversification and growing audit pressure.

The insights were gathered over the past three years through direct experience and ongoing conversations with peers, including leaders from organizations across sectors such as retail, finance, legal and healthcare.

According to the source, AI governance is rapidly being embedded into core procurement decisions in ways that would have been unlikely just a few years ago.

“First, ethics has moved from a procurement checkbox to a board-level scorecard. What was once a largely symbolic ‘AI ethics’ section in vendor questionnaires, often overlooked, is now a weighted evaluation criterion alongside security and service-level agreements. CIOs are increasingly being asked to justify their vendor choices directly to audit committees,” Thomas said.

Thomas Prommer, technology executive and AI specialist with over 30 years of experience leading large-scale engineering teams

“Second, data lineage has become a new baseline requirement. CIOs are now demanding that vendors document data provenance, model update cadence and red-teaming results, effectively turning ‘show me your training data lineage’ into the new ‘show me your SOC 2.’ Vendors unable to provide this level of transparency are losing deals they likely would have secured as recently as 2024,” the source continued.

“Third, multi-vendor AI strategies are emerging in response to diverging model strengths. The growing distinction between providers such as OpenAI and Anthropic is now reflected directly in RFPs, with enterprises increasingly hedging their bets, selecting one provider for speed and performance, and another for governance and risk posture. AI procurement is no longer a single-vendor decision,” concluded Thomas Prommer.

AI Governance Model

In 2024, the European Union introduced the AI Act (Regulation (EU) 2024/1689), the world’s first comprehensive legal framework governing artificial intelligence.

The regulation establishes harmonized rules designed to promote the development and deployment of trustworthy artificial intelligence across the region.

Built on a risk-based approach, the AI Act sets differentiated obligations for developers and deployers depending on the level of risk associated with specific use cases.

It forms part of a broader policy architecture aimed at strengthening Europe’s intelligent technologies ecosystem, including the Artificial Intelligence Continent Action Plan, the Innovation Package for intelligent systems and the rollout of technology-focused innovation hubs.

Together, these initiatives are designed to ensure safety, protect fundamental rights and advance human-centric systems, while accelerating adoption, investment and innovation across the European Union.

Trust and Governance Metrics

For corporations, the shift is reshaping procurement dynamics, with trust increasingly treated as a measurable factor in vendor selection.

Providers that once competed primarily on product performance, integration speed and pricing are now being evaluated on their ability to demonstrate responsible AI controls. These include model transparency, data protection, explainability, risk monitoring and incident response, as well as alignment with established frameworks such as the NIST AI Risk Management Framework, which is designed to help organizations integrate trustworthiness into AI systems.

The shift in procurement reflects broader concerns around the governance of AI systems as adoption accelerates. Cisco’s 2026 Data Privacy Benchmark Study indicates that AI ambition is outpacing organizational readiness, with companies assuming greater responsibilities related to governance, ethics, transparency, explainability and contractual accountability.

Against this backdrop, artificial intelligence governance is increasingly moving beyond policy frameworks and becoming embedded in procurement decisions.

In this environment, enterprise AI providers are being evaluated not only on model performance, but also on their ability to demonstrate reliability, transparency and trustworthiness in real-world deployments.

In other words, the winning enterprise technology vendors may not be those with the most advanced models, but those best positioned to answer a more fundamental question: why should companies trust them?
