The Cost of Getting AI Governance Wrong (And Who Actually Pays for It)
- Samson Lingampalli
- Feb 24
- 5 min read

If Horizon taught us anything, it’s this: when complex systems are shielded from scrutiny, the consequences fall on people. AI only raises the stakes.
Picture this: You're a subpostmaster in a small British village. You've run the local Post Office for fifteen years. Your neighbours trust you. You know their children's names.
Then one Tuesday, the computer system shows you've lost £40,000.
You haven't. But the screen says you have. And the organisation behind that screen has more lawyers than you have family members.
What happens next destroys your life.
The Cost of Getting It Wrong
The Post Office Horizon scandal is now recognised as the most widespread miscarriage of justice in UK history. The accounting software, built by Fujitsu and deployed in 1999, contained bugs that created phantom shortfalls in branch accounts. The Post Office knew about these problems. They pursued prosecutions anyway.
The numbers tell a story no business case ever predicted:
£700 million wasted on the original failed system
£246 million in compensation subsidies requested in February 2026
£104.4 million in additional tax liabilities from mismanaged contractors
Four separate compensation schemes are still trying to make people whole
The system cost roughly £1 billion to build and deploy. The cost of getting it wrong? Still being calculated, decades later.
The Human Cost of Getting It Wrong
736 subpostmasters were wrongfully convicted
At least 13 people took their own lives
The numbers reveal the scale of the failure. They cannot capture the human cost.
Who Writes the Cheques?
Here's where the story gets uncomfortable.
The people who made the decisions that caused this harm? Most faced no personal consequences for years. Paula Vennells, the CEO who continued prosecutions while sitting on the Church of England's Ethical Investment Advisory Group, kept her Commander of the Order of the British Empire (CBE) honour until 2024.
The people who paid?
Subpostmasters lost their businesses, their homes, their marriages, their freedom. Some lost their lives.
Taxpayers are now funding the compensation. The Department for Business and Trade is covering £141.8 million for ongoing remediation and another £104.4 million for the Post Office's tax mistakes. That's your money and mine.
Fujitsu, the company that built the faulty system, has acknowledged a "moral obligation" to contribute. Twenty-six years after deployment, that contribution remains under discussion.
This pattern repeats across every AI governance failure. The gap between who decides, who profits, and who pays is where reputational and financial risk actually live.
The Pattern Emerging Across the Atlantic
The Post Office scandal was fundamentally a software accountability failure. The same pattern is now playing out with AI systems, but faster.
In the United States, a tutoring company called iTutorGroup programmed its AI recruitment software to automatically reject female applicants aged 55 and over and male applicants aged 60 and over. Discriminatory screening that would have taken human recruiters years to carry out at that scale, the AI achieved in months.
The Equal Employment Opportunity Commission settled for $365,000 in 2023. That was the first AI hiring discrimination settlement. It won't be the last.
The Mobley v. Workday case was preliminarily certified as a nationwide collective action in May 2025. The proposed collective covers applicants aged 40 and over who applied for jobs through Workday's AI screening tools since September 2020 and were rejected. The federal court has ruled that when AI performs functions traditionally handled by human employees, the vendor can be treated as an "agent" of the employer, liable for discriminatory outcomes.
The judge's warning should be pinned to every boardroom wall: "Drawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era."
The Financial Stakes Are Quantifiable
Let's talk about money, because that's often what gets attention when ethics don't.
Direct litigation costs:
AI bias audits: £40,000 to £160,000 per system
Algorithm remediation: £80,000 or more to fix biased systems
Settlement costs: Rising with each case
Regulatory penalties now in force:
Texas Responsible AI Governance Act: £8,000 to £160,000 per violation, plus £1,600 to £32,000 per day for continuing violations
Colorado AI Act (effective February 2026): Comprehensive liability for "high-risk" AI decisions
Illinois Human Rights Act amendment: Private right of action for AI discrimination in employment
Insurance gaps: Between 50% and 75% of companies now use AI operationally. Insurers have noticed. They're adding AI-specific exclusions to policies. When the Workday lawsuit hits judgment, many employers may discover their coverage has a hole in it precisely where they need protection most.
The Reputational Cost Nobody Models
Financial exposure you can at least attempt to quantify. Reputational damage is harder.
The Post Office brand took decades to build. It took less than a decade of mismanaged technology and institutional denial to destroy public trust. A 2024 ITV drama, Mr Bates vs The Post Office, did more to communicate the scandal than years of legal proceedings.
When Texas settled with Pieces Technologies in September 2024 over misleading claims about its AI healthcare product, the first-of-its-kind state Attorney General action sent a message to every health technology vendor: regulators are watching, and they're not impressed by marketing claims that your AI can't substantiate.
The Consumer Financial Protection Bureau has been explicit: "If firms cannot manage using a new technology in a lawful way, then they should not use the technology."
That's not a subtle hint.
What Continuous Monitoring Actually Prevents
Every case I've described shares a common feature: the harm accumulated over time because nobody was measuring what the system was actually doing.
The Horizon system flagged phantom shortfalls for years. Nobody tracked the pattern. Nobody asked why so many previously reliable branch managers were suddenly appearing to commit fraud.
AI hiring tools rejected protected groups at higher rates. Nobody ran the four-fifths rule calculations that would have shown disparate impact immediately.
Healthcare AI made claims about accuracy that couldn't be verified. Nobody required ongoing performance metrics against real patient outcomes.
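The four-fifths check mentioned above isn't exotic mathematics. Here's a minimal sketch in Python, assuming you already log applicant and selection counts by demographic group (the group names and figures below are hypothetical):

```python
def adverse_impact_ratios(selected: dict, applicants: dict) -> dict:
    """Selection rate of each group relative to the most-selected group.

    Under the four-fifths rule of thumb, a ratio below 0.8 is a red flag
    for disparate impact and warrants investigation.
    """
    rates = {group: selected[group] / applicants[group] for group in applicants}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}


# Hypothetical monthly figures pulled from a hiring pipeline's logs.
applicants = {"under_40": 1200, "40_and_over": 800}
selected = {"under_40": 240, "40_and_over": 96}

for group, ratio in adverse_impact_ratios(selected, applicants).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

Two counts per group and a couple of divisions. That is the calculation nobody ran.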
Continuous monitoring isn't a nice-to-have. It's the difference between catching a problem in week two and defending a class action in year five.
What does this look like in practice?
Tracking false positive rates by demographic group
Monitoring model drift as data distributions change (see the sketch after this list)
Measuring threshold calibration against actual outcomes
Documenting alert escalation and resolution
Maintaining audit trails that prove you knew what your system was doing
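Model drift, the second item on that list, is also more mechanical than it sounds. One widely used measure is the population stability index (PSI), which compares the score distribution your model sees in production against the distribution it was validated on. A minimal sketch, assuming numeric model scores and the usual rule-of-thumb thresholds (the data here is simulated):

```python
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """Compare two score distributions; larger values mean more drift.

    Common rules of thumb: below 0.1 stable, 0.1-0.25 worth monitoring,
    above 0.25 investigate before trusting the model's outputs.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero or log(0) for empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))


# Simulated example: validation-time scores vs. this week's production scores.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5_000)
live_scores = rng.beta(2, 3, size=5_000)  # the applicant pool has shifted

psi = population_stability_index(baseline_scores, live_scores)
print(f"PSI = {psi:.3f}" + ("  -> raise an alert" if psi > 0.25 else ""))
```

A scheduled job that runs a check like this weekly and writes the result to the audit trail is the difference between noticing in week two and finding out in discovery.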
The organisations that can demonstrate this monitoring will have a defence. The organisations that can't will have a liability.
The Uncomfortable Question
Here's what boards should be asking: If our AI system makes a decision that destroys someone's life, can we explain why it made that decision?
Not "can our vendor explain it." Can we?
If the answer is no, you have a governance gap. That gap has a cost. The only question is when you'll pay it, and whether you'll have any choice about the terms.
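Closing that gap starts with capturing, at the moment a decision is made, enough to reconstruct it later. Here's a minimal sketch of such a decision record, with hypothetical field names; the point is the discipline, not this particular schema:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """Everything needed to answer 'why did the system decide this?' later."""
    decision_id: str
    model_version: str      # the exact model artefact that produced the score
    input_hash: str         # fingerprint of the inputs, not the raw personal data
    score: float
    threshold: float
    outcome: str            # e.g. "rejected", "escalated_to_human"
    top_factors: list[str]  # features that contributed most, where available
    decided_at: str


def record_decision(path: str, record: DecisionRecord) -> None:
    """Append the record to an append-only audit log (JSON Lines)."""
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")


# Hypothetical usage at the point the model's score is applied.
features = {"years_experience": 12, "assessment_score": 0.61}
record_decision("decisions.jsonl", DecisionRecord(
    decision_id="app-0001",
    model_version="screening-model-v3.2",
    input_hash=hashlib.sha256(
        json.dumps(features, sort_keys=True).encode()).hexdigest(),
    score=0.61,
    threshold=0.70,
    outcome="escalated_to_human",
    top_factors=["assessment_score"],
    decided_at=datetime.now(timezone.utc).isoformat(),
))
```

None of this requires cutting-edge explainability research. It requires deciding, before deployment, that the question will have an answer.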
The Post Office couldn't explain why Horizon said subpostmasters were stealing. They prosecuted anyway.
Workday couldn't explain why its AI recommended against certain candidates. Employers relied on those recommendations anyway.
The pattern is the same. The cost is paid by people who never made the decision.
Making Risk Visible Before It Becomes Damage
RAI Tracker exists because this problem is structural, not anecdotal.
You can't govern what you can't measure. You can't measure what you're not monitoring. And you can't monitor effectively without infrastructure built for that purpose.
The organisations getting this right aren't the ones with the best ethics policies. They're the ones with systems that translate policy into metrics, metrics into alerts, and alerts into action.
That's not theoretical. That's operational.
The cost of getting AI governance wrong is now documented in court filings, settlement agreements, and compensation schemes running into the billions. The cost of getting it right is a fraction of that.
The only question is who in your organisation is responsible for making sure you're on the right side of that calculation.
Over the next 12 weeks, I'll be explaining RAI monitoring in plain English: what it is, why it matters, and how it works in practice. All free. Follow our page.
#ResponsibleAI #DigitalGovernment #PublicSector #ResponsibleAITracker #RAITracker #RAIT #RAITFramework


