The Procurement Trap: Buying AI You Can’t Actually Govern
By Simon Shobrook

There’s an uncomfortable truth emerging in public sector AI adoption.
Some public bodies are buying AI systems they don’t actually control.
They call it innovation. They call it transformation.
But when scrutiny arrives, and it always does, they discover something worrying:
They can’t explain how the system works. They can’t provide evidence of oversight. And they can’t intervene when something goes wrong.
That’s not innovation.
That’s operational risk.
Procurement is Moving Faster Than Governance
Across government, AI procurement is accelerating.
The drivers are familiar:
pressure to reduce costs
demand for faster services
expectations that AI will transform productivity
an 'AI first' appetite
And suppliers are responding with increasingly sophisticated products.
The demonstrations look impressive. The potential savings look compelling. Responsible AI claims are prominent in marketing and proposals.
But too often, the harder governance questions are left until after the contract is signed.
Questions like:
Who is monitoring live model performance?
Who is testing for bias or drift over time?
Who can pause or override automated decisions?
Who owns the audit trail of system behaviour?
If the answer to these questions is "the supplier manages that", something important has happened.
Governance has been outsourced.
And with AI, that creates a dangerous dependency.
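To make the drift question above concrete: one widely used check is the Population Stability Index (PSI), which compares the distribution of live model scores against a reference window. A minimal sketch in Python, with hypothetical data and an illustrative threshold; this is not any specific product's method.

```python
# Illustrative only: a minimal Population Stability Index (PSI) check,
# one common way to flag score drift between a reference window and live data.
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Compare two score distributions; higher PSI means more drift."""
    # Bin edges come from the reference distribution
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live scores
    ref_pct = np.histogram(reference, edges)[0] / len(reference)
    live_pct = np.histogram(live, edges)[0] / len(live)
    # Floor the percentages to avoid division by zero and log(0)
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

# Hypothetical data standing in for logged model scores
rng = np.random.default_rng(0)
reference_scores = rng.normal(0.5, 0.1, 10_000)  # scores at go-live
live_scores = rng.normal(0.56, 0.1, 10_000)      # scores this week
drift = psi(reference_scores, live_scores)
if drift > 0.2:  # a commonly cited "investigate" threshold
    print(f"PSI {drift:.3f} exceeds threshold - escalate for review")
```

The point is not the specific metric. It is that someone inside the organisation runs a check like this on a schedule, owns the threshold, and acts on the alert.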
Accountability in the UK Public Sector Doesn’t Move to the Vendor
Under UK public sector governance, accountability for technology decisions never leaves the organisation.
It sits with the Senior Responsible Owner (SRO). It sits with the Accounting Officer. Ultimately, it sits with the Permanent Secretary or Chief Executive.
When automated decisions affect the public, whether that’s housing allocation, fraud detection, case prioritisation or benefits processing, public bodies must be able to demonstrate:
transparency of decision logic
ongoing oversight of system behaviour
evidence of fairness and equality impact monitoring
the ability to intervene quickly if outcomes become problematic
These expectations are reflected across multiple policy and regulatory frameworks, including:
DSIT guidance on responsible AI adoption
Cabinet Office AI procurement guidance
UK GDPR and the Data Protection Act 2018
the Public Sector Equality Duty
emerging algorithmic transparency expectations across regulators
Together, these frameworks point toward a simple reality:
Having a contract that says a supplier operates responsibly is not enough.
Public bodies must be able to demonstrate that oversight themselves.
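What might demonstrating fairness monitoring look like at the most basic level? One simple illustration is comparing automated outcome rates across groups and flagging large disparities for human review. A sketch with hypothetical data and field names; the 0.8 ratio is the US "four-fifths" heuristic, used here purely as an example trigger, not as a test of UK legal compliance.

```python
# Illustrative sketch: compare automated approval rates across groups and
# flag large disparities for human review. Data and names are hypothetical.
from collections import defaultdict

decisions = [  # in practice, read from the system's decision log
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    approvals[d["group"]] += d["approved"]

rates = {g: approvals[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    # The 0.8 ratio is an example trigger, not a legal standard
    if best > 0 and rate / best < 0.8:
        print(f"Group {group}: approval rate {rate:.0%} vs best {best:.0%} - review")
```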
The Real Governance Gap
What we are seeing in many organisations is not a lack of awareness.
Public sector leaders know the risks.
They are being bombarded with guidance, frameworks and policy documents from multiple directions.
The challenge is something different:
Turning governance principles into operational capability.
Most organisations do not yet have the tools to:
observe how AI systems behave once they are live
monitor performance and bias signals over time
track changes to models, data or thresholds
evidence intervention decisions during audit or scrutiny
maintain independent oversight of vendor systems
Without that capability, governance becomes largely theoretical.
Frameworks exist. Policies exist.
But real-time oversight does not.
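Part of closing that gap is deciding what a unit of evidence even looks like. Below is a minimal sketch of an append-only decision record, with hypothetical field names, capturing enough context (model version, input fingerprint, outcome, any human override) to answer an auditor's questions months later.

```python
# Sketch of an append-only decision record. Field names are illustrative;
# the point is to capture model version, inputs, outcome and any human
# intervention at the moment each automated decision is made.
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    timestamp: str              # when the decision was made
    model_version: str          # exactly which model/threshold configuration ran
    input_hash: str             # fingerprint of the input, not the raw data
    outcome: str                # what the system decided
    overridden_by: str | None   # who intervened, if anyone

def log_decision(case_input: dict, model_version: str, outcome: str,
                 overridden_by: str | None = None) -> DecisionRecord:
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_hash=hashlib.sha256(
            json.dumps(case_input, sort_keys=True).encode()).hexdigest(),
        outcome=outcome,
        overridden_by=overridden_by,
    )
    with open("decision_log.jsonl", "a") as f:  # append-only audit file
        f.write(json.dumps(asdict(record)) + "\n")
    return record

log_decision({"case_id": 123, "score": 0.91}, "fraud-model-2.4", "flagged")
```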
Governance Requires Observability
In mature digital systems, observability is standard practice.
We monitor uptime, performance, security events and system behaviour continuously.
AI systems require the same level of visibility.
In fact, they require more.
Because AI systems are not static software.
They learn, adapt, drift and behave differently depending on the data they encounter.
This means governance must move beyond documentation and toward live operational insight.
Public bodies need to be able to answer simple but critical questions at any time:
What is the system doing today?
Is performance changing?
Are outcomes shifting for different groups?
Can we intervene quickly if necessary?
Can we evidence our oversight during an audit or investigation?
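The last two questions point at a control many deployments lack: a pause mechanism the organisation holds itself, independent of the vendor. A minimal sketch of the idea as a circuit breaker, with hypothetical names and thresholds.

```python
# Illustrative circuit breaker: route decisions to human review when a
# monitored risk signal breaches its threshold. Names are hypothetical.
class AIGate:
    def __init__(self, drift_threshold: float = 0.2):
        self.drift_threshold = drift_threshold
        self.paused = False

    def check(self, current_drift: float) -> None:
        # Called whenever monitoring produces a fresh risk reading
        if current_drift > self.drift_threshold:
            self.paused = True  # stop trusting automated outcomes

    def decide(self, automated_outcome: str) -> str:
        # Automated decisions only flow through while the gate is open
        return automated_outcome if not self.paused else "refer_to_human"

gate = AIGate()
gate.check(current_drift=0.31)            # monitoring reports high drift
print(gate.decide("reject_application"))  # -> refer_to_human
```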
This Is the Problem RAITracker Was Built to Solve
RAITracker (Responsible AI Tracker) was developed specifically to address this governance gap.
It gives organisations independent oversight of AI systems, including those supplied by external vendors.
Instead of relying solely on supplier assurances, public bodies gain the ability to:
See how systems behave in practice. RAITracker provides deep observability into AI systems once they are operational.
Monitor risks continuously. Model performance, fairness signals and system behaviour can be tracked over time.
Maintain independent oversight of vendors. Governance does not rely solely on supplier reporting.
Evidence their AI governance. Audit trails and monitoring records help organisations demonstrate oversight to regulators, auditors and parliamentary scrutiny.
Intervene when necessary. Organisations retain the ability to challenge, pause or review automated decisions when risk thresholds are reached.
In other words:
RAITracker helps public bodies move from trusting AI suppliers to governing AI systems.
Responsible AI Requires Operational Capability
Responsible AI cannot be achieved through procurement clauses alone.
Nor can it be delivered purely through governance frameworks.
It requires operational capability.
The ability to observe, challenge and intervene.
Because once an AI system is live and influencing decisions about people’s lives, the question regulators and the public will ask is simple:
Who was watching the system?
And more importantly:
What did they do when something changed?
Before Buying AI, Ask Three Questions
Before any public body deploys AI into a critical service, three governance questions should be answered clearly:
Can we see how the system behaves?
Can we challenge its outcomes?
Can we stop it if necessary?
If the answer to any of those questions is unclear, the organisation is not procuring innovation.
It is procuring risk.
And in the public sector, that risk ultimately sits with the organisation itself.


