The Procurement Blindfold
On Monday morning, a procurement board approves an artificial intelligence tool that promises to sort cases, draft replies and save a struggling team hours each week. The demo is immaculate: clean graphs, calm voices, a reassuring slide titled “Responsible AI”. What is harder to see is what the room has not pinned down: the clause that lets the supplier change the model during the contract, the absence of a meaningful audit right, the vague promise of support, the lack of a rehearsed exit plan, the fact that nobody has yet shown frontline staff how the system fails, not just how it shines. By autumn the tool is producing odd outputs, workers are either overriding it constantly or trusting it too much, and the first complaints have started to reach the public. The board asks when governance broke down. The honest answer is that it did not break down at the end. It was designed, quietly, upstream.
Where the real decision happens
(Part of our Institutional Blindfold series; previous articles are at https://neuralhorizons.substack.com/archive.)
The usual story about artificial intelligence governance begins too late. It begins at deployment, when staff first open a system, or at harm, when a citizen or customer feels the consequences. In practice, many of the deepest choices are made earlier: in the problem statement, the route to market, the tender questions, the evaluation criteria and, above all, the contract. The Organisation for Economic Co-operation and Development notes that public procurement accounts for about 13% of gross domestic product across its members and warns that poorly governed artificial intelligence adoption can leave public actors with highly integrated systems they cannot independently maintain or monitor. The United Kingdom’s Artificial Intelligence Playbook therefore tells teams to work with commercial colleagues from the start, while the Ada Lovelace Institute argues that procurement is not a mere gateway but a key decision point where regulation, governance and public value come together. [1]
Seen through a human lens, this is also where organisations are most vulnerable to familiar distortions of judgement. Novelty bias makes the demo feel like proof. Authority transfer makes vendor confidence feel like expertise. Learned dependence begins the moment a buyer signs for a system it cannot really interrogate. Those patterns are visible in recent fieldwork. In a 2025 study of United States cities presented at the Association for Computing Machinery Conference on Fairness, Accountability, and Transparency, procurement staff often did not know what questions to ask vendors and struggled to evaluate the answers even when vendors provided them. The National Audit Office, looking at government technology buying more broadly, found that commercial teams often operate with insufficient digital expertise and that senior decision makers still have a habit of fixing on technology too early, before the underlying business problem is properly understood. [2]
Two procurement worlds, one upstream problem
Public-sector and enterprise procurement are not identical, but they rhyme. In government, a purchase sits inside democratic obligations: transparency, legality, fairness, continuity of service and public legitimacy. The Ada Lovelace Institute stresses that public-sector buyers face a higher level of scrutiny because they spend public money and often procure tools for welfare, housing, immigration, healthcare, education or policing, where the stakes are not merely commercial but civic. The United Kingdom’s Data and Artificial Intelligence Ethics Framework is explicit that, even when a supplier builds the system, the public body remains responsible for the outputs and their impacts. That is the crucial public-sector asymmetry: a citizen cannot simply “switch provider” when the state adopts a flawed system. [3]
Enterprises face a different constituency, but not a simpler technology. Their buyers answer mainly to boards, customers, insurers and regulators rather than to citizens, yet they are increasingly procuring systems that are probabilistic, adaptive and difficult to inspect, not ordinary deterministic software. IEEE 3119-2025 treats procurement of artificial intelligence and automated decision systems as a distinct risk-management process spanning problem definition, vendor evaluation, contract negotiation and contract monitoring. Cari Miller’s 2026 framework for enterprise artificial intelligence procurement makes the same point more bluntly: if risks and controls are treated as an afterthought, organisations are exposed before governance has really begun. So the deepest similarity between public and enterprise procurement is structural. Both are deciding, upstream, how much visibility, leverage and internal judgment they will still possess once the system goes live. The deepest difference is whose trust is being mortgaged when they get it wrong. [4]
The contract is the hidden constitution
If procurement is the upstream governance site, then the contract is the hidden constitution. Start with audit and transparency rights. The Artificial Intelligence Playbook says contracts can require suppliers to provide the information categories needed for the Algorithmic Transparency Recording Standard. The Data and Artificial Intelligence Ethics Framework goes further: suppliers must be able to explain how they build the tool, the logic and assumptions inside it, the data used to train it, how accurate it is, what biases are known, how the system performs across relevant groups, and what risks the supplier has identified and mitigated. The same framework says organisations must keep records that make the system auditable and reviewable, establish mechanisms for independent challenge, and ask suppliers to share information about how the system works. This is not decorative paperwork. It is the practical minimum for knowing what has been bought. [5]
Model-change control is the next fault line. The Digital, Data and Technology Playbook notes that some cloud-based services include dynamic models; the thing a buyer procures may not be the thing users are actually relying on six months later. The Artificial Intelligence Playbook therefore says updates must pass through a managed release process, changes should be documented, and organisations should be able to withdraw a release and revert to an earlier version if necessary. The Guidelines for Artificial Intelligence procurement add that impact assessments should be revisited whenever a substantial change is made to the design of the system. Without those clauses, model governance becomes a kind of moving pavement: the buyer thinks it approved one system, while the supplier quietly walks it somewhere else. [6]
Then come exit, interoperability and intellectual property. The Digital, Data and Technology Playbook warns that black-box algorithms can create vendor lock-in because buyers struggle to build on or move away from an existing system with confidence in its outputs. It recommends open standards, appropriate licensing terms and careful treatment of intellectual property, and it stresses the value of open and interoperable data and software because they support documentation, maintenance, reuse and lower long-term cost. It also requires serious exit planning: data and information must be returned, asset and knowledge transfers mapped, responsibilities and milestones defined, and the outgoing supplier’s exit joined to the incoming supplier’s mobilisation or to an in-house handover. Exit rights are not an end-of-contract technicality. They are the price of remaining free enough to govern. [7]
Lock-in is not just technical
Vendor lock-in sounds like an engineering problem, but the deeper issue is institutional control. The Organisation for Economic Co-operation and Development lists both data lock-in and vendor lock-in among the main challenges in public procurement of artificial intelligence. The National Audit Office adds that government is still adjusting to a world in which traditional outsourcing gives way to subscription-based cloud services and to giant suppliers “bigger than governments themselves”. Ada Lovelace Institute’s work on local government procurement reaches the same conclusion from the ground: market concentration and knowledge asymmetries can narrow the field to a few large suppliers, encourage unfair contracting clauses and make long contracts hard to escape. The risk is not merely that the buyer pays too much. It is that the buyer loses bargaining power about evidence, upgrades, failures and future options. [8]
The other half of lock-in is internal capability erosion. The Organisation for Economic Co-operation and Development warns that buyers can be left without the independent capability to maintain or monitor systems. The National Audit Office likewise argues that departments struggle to assemble the right blend of digital and commercial skills, while barriers such as legacy systems and weak data access still limit what government can do with artificial intelligence. The Artificial Intelligence Playbook responds with a plain admonition: teams, senior responsible owners and policy leaders need the skills to understand, maintain and govern what they buy. This is the procurement version of learned helplessness. When a supplier becomes the only party able to explain the model, evaluate its drift, or judge whether it is still fit for purpose, the organisation has not merely outsourced a tool. It has outsourced a slice of institutional judgment. [9]
When responsible procurement becomes ceremonial
This is why “responsible artificial intelligence procurement” can become ceremonial. The ceremony is not the existence of principles; it is the gap between principles and purchasing reality. The Ada Lovelace Institute found that local government lacks a clear and comprehensive account of how to procure artificial intelligence in the public interest and, critically, that there are no clear structures for supplier accountability. Its later work concludes even more starkly that public procurement of artificial intelligence is not fit for purpose. Another of its warnings is “slipstreaming”: artificial intelligence arriving quietly inside ordinary software products and services, which makes it harder even to know where governance should attach. If an organisation does not know what counts as artificial intelligence, where it is embedded, or how to question the supplier, then “responsible procurement” becomes a badge on the cover sheet rather than leverage over the deal. [10]
The 2025 Association for Computing Machinery paper gives that ceremony a concrete shape. In many cities, low-cost or free tools could be acquired under financial thresholds without a full solicitation, including subscriptions to general-purpose tools and even donated systems. Some purchases made through pre-existing contracts with other governments skipped renegotiation altogether, leaving cities to accept vendor terms as they stood. In some cases, cities had to sign the contract before they could even access the system behind the paywall. One city learned only after the fact that its contract allowed just five support calls. Officials also reported that risks from inaccurate outputs in a chatbot procurement were “not part of the conversation” with the vendor. This is the heart of the problem. A governance regime that applies only to the expensive, obvious or bespoke purchase is not really governing the present market at all. [11]
Public harms then emerge downstream, long after the upstream choices have been naturalised. The Ada Lovelace Institute points to the Horizon scandal and the Home Office’s visa-streamlining algorithm as reminders that public-sector deployment can go badly wrong when complex technologies are adopted without sufficient oversight across their life cycle. The National Audit Office’s wider lesson on digital change is equally relevant: major failures often begin when decision makers fix on the technology before the underlying business need is properly understood. Responsible procurement becomes ceremonial when the buyer can fill in the ethics paperwork but cannot compel disclosure, cannot govern model changes, cannot audit performance in context, and cannot leave with its data and know-how intact. That is not governance. It is governance theatre staged in front of an irreversible contract. [12]
A governance test before you sign
Boards and chiefs of procurement do not need a new slogan. They need a gate. Before any significant artificial intelligence purchase, renewal or material expansion, they should require written answers to seven questions.
1. Problem clarity. Can we explain, in ordinary language, what problem this system solves, why artificial intelligence is justified, and what non-artificial-intelligence alternative we rejected? If not, we are buying fashion, not capability. [13]
2. Transparency depth. Do we have enough information on data provenance, model logic, benchmarks, demographic performance, security and known failure modes to brief a non-executive director, a frontline user and an affected member of the public? [14]
3. Change control. Does the contract require notice, evidence, approval criteria, release logs and rollback rights for material changes to the model, training data or system architecture? [15]
4. Audit and challenge. Do we have rights to documentation, logs, incident reporting, independent review and meaningful testing of the system in our own context rather than only the vendor’s preferred benchmarks? [16]
5. Exit and handover. Can we recover our data, documentation and knowledge assets, switch supplier or bring the function in-house, and keep the service running through transition? [17]
6. Capability retention. Who inside the organisation will monitor performance, investigate drift, judge whether outputs remain defensible and steward the next procurement? If the answer is “the vendor”, the organisation is hollowing itself out. [18]
7. Coverage realism. Do these protections also apply to pilots, embedded features, low-cost subscriptions, donated products and systems bought through pre-existing frameworks? If not, the responsible procurement regime is mostly theatre. [19]
The procurement blindfold is not inevitable. It survives because organisations still treat procurement as administration when, for artificial intelligence, it is closer to constitutional design. The most practical reform is also the least glamorous: move governance upstream, write it into the contract, preserve internal judgment, and refuse any deal that hides its workings behind polished assurances and thin exit rights. By the time an employee or citizen encounters the system, the organisation should already have decided whether it can see, question, change and, if needed, leave what it has bought. If it cannot, then the decisive governance choice has already been made — and made badly. [20]
Bibliography
UK Government. Guidelines for AI Procurement. https://www.gov.uk/government/publications/guidelines-for-ai-procurement/guidelines-for-ai-procurement
UK Government. The Digital, Data and Technology Playbook. https://www.gov.uk/government/publications/the-digital-data-and-technology-playbook/the-digital-data-and-technology-playbook-html
UK Government. Data and AI Ethics Framework. https://www.gov.uk/government/publications/data-ethics-framework/data-and-ai-ethics-framework
Ada Lovelace Institute. https://www.adalovelaceinstitute.org/report/buying-ai-procurement/
Ada Lovelace Institute. https://www.adalovelaceinstitute.org/report/spending-wisely-procurement/
Johnson, N., Silva, E., Leon, H., Eslami, M., Schwanke, B., Dotan, R. and Heidari, H. 2025. Legacy Procurement Practices Shape How U.S. Cities Govern AI: Understanding Government Employees’ Practices, Challenges, and Needs. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3715275.3732049


