What “mature AI governance” actually looks like in practice
By Simon Shobrook
Apr 14 · 5 min read

There’s a lot of talk about concepts, policies, and frameworks. But very little is actually deployed and proven in practice. The reality? The horse has already bolted.
By autumn 2023 (ancient history in the AI universe!), the National Audit Office had identified 74 AI use cases already deployed across surveyed government bodies. That’s a conservative baseline. Since then, Cabinet Office transparency publications under the Algorithmic Transparency Recording Standard (ATRS) show a growing number of live systems being declared, but as of today (April 2026), there is still no complete, consolidated government view, and what has been published remains partial and already out of date.
Beyond central government, the picture is the same. In the NHS, national programmes have already taken multiple AI technologies into real-world, multi-site deployment. In local government, around 95% of councils are now using or exploring AI.
Adoption is already widespread and accelerating. In practice, some departments report managing dozens of models at varying deployment lifecycle stages (although even at a department level, it's hard to confirm an accurate figure). Local government teams are developing and deploying their own AI services in-house.
So why don’t we have a clear number in April 2026?
Because there is no single, centralised view. AI systems are being developed, procured, and deployed across departments, ALBs, NHS bodies, and local authorities, often independently of one another. Current transparency mechanisms rely on self-reporting, partial scope, and point-in-time snapshots.
The deployment of AI has outpaced the government’s capacity to track it. AI systems are live, evolving, and making decisions now. Mature AI governance isn’t what you write down. It’s what happens when the system is already in operation.
What the market is showing us
Across central government, local government and the NHS, a consistent pattern is emerging.
The message from the centre is shifting. This is no longer about defining principles — it’s about making AI work safely in live environments.
UK departments such as the Department for Business and Trade (DBT) have been openly discussing this since 2024, positioning governance as something that should enable delivery rather than act as a blocking approval layer.
DSIT and GDS are now publishing practical guidance (e.g. the AI Playbook) focused on implementation, not theory. The Local Government Association is highlighting that councils are ready to adopt AI but need operational capability and safeguards. The NHS and associated national bodies are focusing on real-world deployment, evaluation, and ongoing oversight.
The direction of travel is clear: AI governance is moving from concept to operation.
1. Governance is moving towards delivery, but it isn’t there yet
UK government bodies are starting to push governance closer to delivery, but in practice, most organisations are still operating with point-in-time controls.
The direction is right. The execution is not there yet.
What’s becoming clear, from the centre and regulators, is that governance has to exist:
before deployment (use case, risk, thresholds, approvals)
at deployment (controls, access, safeguards)
after deployment (continuous monitoring, drift, outcomes)
A one-off red teaming, a sign-off meeting, or a periodic review won’t cut the mustard.
A good real-world example of this shift is the MHRA’s AI Airlock programme. Rather than relying on one-off approvals, it focuses on testing AI in controlled real-world environments, with ongoing monitoring, evidence collection, and iteration as systems evolve. The emphasis is not on a single point of assurance, but on continuous oversight as the technology operates.
Similarly, NHS programmes such as the AI in Health and Care Award emphasise the need for real-world evaluation, information governance, and ongoing performance monitoring in deployment, not just pre‑deployment sign‑off.
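To make those three lifecycle stages concrete, here is a minimal sketch of what gates can look like as data rather than documents. Everything in it (GovernanceRecord, its fields, the one-day staleness window) is a hypothetical illustration, not any department's actual schema.

```python
# A minimal sketch of the three lifecycle gates as data, assuming a
# hypothetical in-house register. Names and thresholds are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class GovernanceRecord:
    system_name: str
    # Before deployment: use case, risk, thresholds, approvals
    use_case: str
    risk_threshold: float            # e.g. maximum acceptable error rate
    approved_by: Optional[str]       # a named person, not a committee
    # At deployment: controls, access, safeguards
    access_controls_in_place: bool
    # After deployment: continuous monitoring, drift, outcomes
    last_monitoring_check: Optional[datetime]

def governance_gaps(rec: GovernanceRecord, now: datetime) -> list[str]:
    """Return unmet conditions across all three lifecycle stages."""
    gaps = []
    if rec.approved_by is None:
        gaps.append("pre-deployment: no named approver")
    if not rec.access_controls_in_place:
        gaps.append("at deployment: access controls not verified")
    if rec.last_monitoring_check is None:
        gaps.append("post-deployment: no monitoring evidence")
    elif now - rec.last_monitoring_check > timedelta(days=1):
        gaps.append("post-deployment: monitoring evidence is stale")
    return gaps

record = GovernanceRecord(
    system_name="triage-assist",          # hypothetical system
    use_case="prioritise casework queues",
    risk_threshold=0.05,
    approved_by=None,                     # deliberately incomplete
    access_controls_in_place=True,
    last_monitoring_check=None,
)
print(governance_gaps(record, datetime.now(timezone.utc)))
```

The specific fields don't matter. What matters is that a gate a machine can evaluate continuously is stronger than a sign-off minute nobody re-reads.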
2. Accountability is explicit and traceable
The ICO is clear in its AI governance guidance: organisations must be able to demonstrate accountability, including defined roles and responsibilities, documented decision-making, and evidence of oversight.
In practice, that means:
Someone owns the model
Someone owns the risk decision
Someone can explain why that threshold was set
And crucially, you can evidence it after the fact.
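As a minimal sketch, assuming a hypothetical append-only decision log (the field names, IDs, and file path are all illustrative), this is what explicit, traceable accountability can look like when it is captured as data at the moment the decision is made:

```python
# Hypothetical append-only risk-decision log. Ownership, rationale,
# and timestamp are written once, so they can be evidenced later.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class RiskDecision:
    model_id: str
    model_owner: str        # someone owns the model
    risk_owner: str         # someone owns the risk decision
    threshold: float
    rationale: str          # why that threshold was set
    decided_at: str         # ISO timestamp, written once

decision = RiskDecision(
    model_id="eligibility-check-v3",                 # illustrative IDs
    model_owner="j.smith@example.gov.uk",
    risk_owner="a.patel@example.gov.uk",
    threshold=0.02,
    rationale="False positives trigger manual review, not refusal; "
              "2% keeps review volume within caseworker capacity.",
    decided_at=datetime.now(timezone.utc).isoformat(),
)

# Append-only JSON Lines: the evidence exists the moment the decision does.
with open("risk_decisions.jsonl", "a") as log:
    log.write(json.dumps(asdict(decision)) + "\n")
```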
I was given a brilliant analogy by a superstar in this space: pharmaceuticals. They are rigorously tested before being approved for public use, but that’s not the end of governance. They are continuously monitored, reviewed, and regulated once in use to ensure ongoing safety. AI should be no different.
3. Real-time visibility replaces retrospective assurance
Many organisations still lack:
Clear accountability structures
Performance metrics
Mechanisms to track AI in operation
So governance becomes retrospective:
→ chasing evidence
→ reconstructing decisions
→ hoping nothing breaks
Mature governance flips that:
→ live monitoring
→ structured audit trails
→ evidence generated as the system runs
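A minimal sketch of that last arrow, assuming a hypothetical wrapper around whatever prediction function you actually run: every call emits a structured audit event as it happens, so nothing needs reconstructing later.

```python
# Hypothetical audit wrapper: each prediction appends a structured event
# (model version, input hash, output, timestamp) to a JSON Lines trail.
import hashlib, json, time, uuid

def audited(model_fn, model_version: str, log_path: str = "audit_trail.jsonl"):
    """Wrap a prediction function so each call writes an audit event."""
    def wrapper(payload: dict) -> dict:
        result = model_fn(payload)
        event = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,
            # Hash rather than copy the input: traceable without duplicating data.
            "input_hash": hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()).hexdigest(),
            "output": result,
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(event) + "\n")
        return result
    return wrapper

# Toy model standing in for the real system.
score = audited(lambda p: {"risk_score": 0.12}, model_version="2.4.1")
print(score({"case_id": "ABC-123"}))
```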
4. Governance is operational, not philosophical
Most AI risks don’t come from a lack of principles.
They come from weak operational controls.
This is consistent across the centre: the NAO has highlighted gaps in accountability and monitoring in live systems; DSIT/GDS guidance focuses on implementation and lifecycle control; and the ICO requires demonstrable, ongoing oversight, not one-off compliance.
What data can the model access?
Who changed the model version?
What happens when performance drifts?
These are operational questions, not ethics statements.
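As a sketch of how those questions become controls rather than statements (all field names, versions, and tolerances below are placeholder assumptions):

```python
# What data can the model access? An explicit allow-list, enforced.
ALLOWED_FIELDS = {"age_band", "region", "case_type"}

def check_input(payload: dict) -> dict:
    """Reject any field the model was never approved to see."""
    extra = set(payload) - ALLOWED_FIELDS
    if extra:
        raise PermissionError(f"model not approved for fields: {sorted(extra)}")
    return payload

# Who changed the model version? An append-only, attributable log.
VERSION_LOG: list[tuple[str, str]] = []

def promote(version: str, changed_by: str) -> None:
    VERSION_LOG.append((version, changed_by))

# What happens when performance drifts? A rule, not a hope.
def drift_alert(live_accuracy: float, baseline: float, tolerance: float = 0.05) -> bool:
    return (baseline - live_accuracy) > tolerance

promote("2.4.1", "j.smith")
print(check_input({"age_band": "30-39", "region": "NE"}))
print(drift_alert(live_accuracy=0.81, baseline=0.90))   # True -> escalate
```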
So what does “mature” actually look like?
In practice, it looks like:
A live register of AI systems, not a spreadsheet (e.g. departments publishing ATRS records of in‑scope algorithmic tools, maintained as living artefacts rather than static inventories)
Threshold-based decision rules, not vague risk statements (e.g. fraud detection or eligibility systems with explicit risk thresholds and escalation triggers, underpinned by observability: metrics and telemetry that track performance, risk, and outcomes in real time, rather than “low/medium/high risk” labels; see the sketch after this list)
Named decision authority, not shared responsibility (e.g. clearly assigned model owners, SROs, and accountable sign‑off roles aligned to ICO expectations on accountability)
Continuous monitoring, not periodic review (e.g. MHRA AI Airlock-style real‑world monitoring, NHS deployments with ongoing performance and safety tracking post go‑live)
Evidence captured automatically, not manually assembled (e.g. system logs, audit trails, metrics and telemetry generated as models run, rather than reconstructed for audit after the fact)
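Here is the threshold sketch referenced in the second item above: a hypothetical fraud-scoring rule where the escalation trigger and the hard stop are explicit, machine-checkable values rather than a “medium risk” label. The numbers are illustrative, not recommendations.

```python
# Hypothetical threshold-based decision rule with an explicit hard stop.
from enum import Enum

class Action(Enum):
    AUTO_PROCEED = "auto_proceed"    # within tolerance
    HUMAN_REVIEW = "human_review"    # escalation trigger crossed
    HARD_STOP = "hard_stop"          # system halts; no in-band override

REVIEW_THRESHOLD = 0.70      # above this, a caseworker reviews
HARD_STOP_THRESHOLD = 0.95   # above this, the system refuses to act

def decide(fraud_score: float) -> Action:
    if fraud_score >= HARD_STOP_THRESHOLD:
        return Action.HARD_STOP
    if fraud_score >= REVIEW_THRESHOLD:
        return Action.HUMAN_REVIEW
    return Action.AUTO_PROCEED

for score in (0.20, 0.80, 0.97):
    print(score, decide(score).value)
```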
The gap most organisations haven’t closed
AI governance is still not consistently operationalised at scale.
We don’t have a framework concept problem.
We have an execution problem.
What this means in practice
Mature AI governance is not about having the best framework.
It’s about being able to answer, in real time:
“What is this system doing right now — and who is accountable for it?”
If you can’t answer that quickly and confidently,
you don’t have governance.
You have documentation.
This is the gap RAITracker is designed to close.
Not as another framework, but as an operational platform — built through partnership with government, informed by academic research, and shaped by the people actually responsible for delivering AI safely in practice.
Independent, evidence-led, and grounded in real-world delivery, it brings together observability, metrics, and governance into a single view — so organisations can move from theory to control.
With it, you can:
Test AI models pre-deployment to select the most appropriate ones.
Hold suppliers accountable for their AI claims through metrics, SLAs, auditability, and contractual obligations.
Set risk-tolerance thresholds, including hard-stop safeguards.
Monitor your models with deep observability, enabling timely human intervention when required.
Understand how your models work and guide suppliers or in-house teams to improve performance.
Run a governance system that safeguards the public and protects the organisation and its responsible owners.