

Observability is not governability
By Dr Joanna Michalska
Over the past several weeks, this series has traced a consistent gap. The gap between what organisations know and what they can do. Between signals that appear and decisions that follow. Between the oversight roles that exist and the intervention authority that is exercised. This time, the same gap appears in a different form. One that is growing more visible as AI governance investment accelerates. The gap between observing a system and being able to…
6 days ago · 4 min read


Justification under uncertainty: the missing governance capability
By Dr Joanna Michalska
The last article’s argument was about escalation as a design problem. About the structural conditions that prevent concerns from travelling upward, and the credible intervention deficit that keeps people silent even when they know something is wrong. This week, the question moves to the moment after escalation. Assume the concern has reached the authority. A decision now has to be made. The system has not clearly failed. No threshold has been formally…
May 1 · 6 min read


Who decides when the AI is wrong?
By Dr Joanna Michalska
When an AI system produces outputs that worry the people closest to it, but no threshold has formally been breached, who decides whether that is a problem worth acting on? This is where a quiet but consistent governance failure appears. Not the absence of escalation pathways, but the absence of defined intervention thresholds. The Borderline Problem AI systems do not typically fail in obvious ways; they drift. They produce outputs that are statistical…
Apr 22 · 4 min read
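The drift pattern this excerpt describes can be made concrete. Below is a minimal sketch, not from the article, of a rolling-mean drift check in Python; the baseline, window size, and alert band are all hypothetical values that someone has to choose, which is precisely the intervention-threshold problem the piece points at.

```python
# Minimal drift-check sketch: every individual score looks plausible,
# but the rolling mean walks away from the validation baseline.
from collections import deque

BASELINE_MEAN = 0.50   # mean score seen during validation (assumed)
ALERT_BAND = 0.10      # tolerated wander of the rolling mean (assumed)

window = deque(maxlen=200)  # keep only the most recent scores

def observe(score: float) -> bool:
    """Record one model output; return True once the rolling mean drifts."""
    window.append(score)
    rolling_mean = sum(window) / len(window)
    return abs(rolling_mean - BASELINE_MEAN) > ALERT_BAND

# A stream that drifts upward in tiny, individually unremarkable steps:
alerts = [i for i in range(400) if observe(0.5 + i * 0.002)]
print(f"first alert at observation {alerts[0]}" if alerts else "no alert")
```

No single output breaches anything here; the governance question is who owns the alert band and what happens when it trips.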


What to watch in AI governance over the next 18 months
By Simon Shobrook
Governance as a procurement gate The UK has no single AI law. The government has deliberately taken a sector-regulator approach, asking the ICO, FCA, CMA and others to apply existing frameworks to AI in their respective domains. But that doesn't mean procurement teams have no obligations. Cabinet Office guidance, the Algorithmic Transparency Recording Standard, and growing pressure from central government on audit-readiness are already shaping what gets…
Apr 21 · 2 min read


What “mature AI governance” actually looks like in practice
By Simon Shobrook
There’s a lot of talk about concepts, policies, and frameworks. But very little is actually deployed and proven in practice. The reality? The horse has already bolted. By autumn 2023 (ancient history in the AI universe!), the National Audit Office had identified 74 AI use cases already deployed across surveyed government bodies. That’s a conservative baseline. Since then, Cabinet Office transparency publications under the Algorithmic Transparency Record…
Apr 14 · 5 min read


Why thresholds matter more than accuracy scores
By Simon Shobrook
Discussions of AI performance typically begin with accuracy. A model is described as “95% accurate” or as outperforming a benchmark. These statements are useful. They provide a high-level indication of model capability and allow for comparison across approaches. However, once an AI system is deployed into an operational environment, accuracy is no longer the primary determinant of outcomes. Consider pre-AI rule-based systems, such as highly accurate…
Apr 9 · 4 min read
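The operational point in this excerpt reduces to simple arithmetic. The worked example below uses invented numbers, not the article's: a screening model that is 95% accurate in both directions, applied to a population where only 1% of cases are genuinely positive, produces an alert queue that is mostly false positives, and the size of that queue is governed by the decision threshold rather than the headline accuracy figure.

```python
# Illustrative base-rate arithmetic; all numbers are assumptions.
population = 100_000
base_rate = 0.01        # 1% of cases are genuinely problematic
sensitivity = 0.95      # true positive rate at the chosen threshold
specificity = 0.95      # true negative rate at the chosen threshold

positives = population * base_rate             # 1,000 real cases
negatives = population - positives             # 99,000 benign cases

true_alerts = positives * sensitivity          # 950 caught
false_alerts = negatives * (1 - specificity)   # 4,950 wrongly flagged

accuracy = (true_alerts + negatives - false_alerts) / population
precision = true_alerts / (true_alerts + false_alerts)

print(f"overall accuracy:        {accuracy:.0%}")                    # 95%
print(f"alerts raised:           {true_alerts + false_alerts:,.0f}") # 5,900
print(f"alerts that are genuine: {precision:.0%}")                   # ~16%
```

Moving the threshold trades sensitivity against specificity, so the review workload can multiply or collapse while the accuracy score barely moves.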


The gap between detecting a problem and deciding what to do about it
By Dr Joanna Michalska
Last week, the argument was that governance fails not because people lack training or awareness, but because organisations are not structurally designed to convert signals into decisions. This week, I want to go even deeper into the question. Why does escalation break even when people find the courage to speak up? The credible intervention gap Here is the uncomfortable finding that sits underneath most escalation research. In regulated organisations…
Mar 30 · 5 min read


Building an AI governance framework that actually gets used
By Simon Shobrook
Governance frameworks often fail simply because they aren’t put into practice. Add AI into the mix, and things can quickly spiral out of control. Policies get written. Principles get agreed upon. Committees get formed. And then reality happens. AI systems go live. Models change. Risks evolve. Suppliers update without visibility. And the “framework” sits in a document somewhere, disconnected from what’s actually happening. That’s the gap. If governance…
Mar 24 · 1 min read


AI Governance is not a training problem - it is a design problem
By Dr Joanna Michalska
Last week, I wrote about psychological safety in AI governance: that people must feel safe enough to raise concerns before escalation can begin. But safety alone is not enough. Even when someone raises a warning, the organisation still needs a structure capable of converting that signal into a decision. And this is where, across many institutions, governance quietly breaks and is often only recognised too late. Spending on governance. Not building it…
Mar 23 · 3 min read


If speaking up isn't safe, is AI Governance real?
By Dr Joanna Michalska
Most organisations assume that if something goes wrong, someone will raise it. In practice, many people recognise something is wrong long before leadership does. They hesitate. They recheck the data. They wait for stronger evidence. Sometimes they say nothing at all. By the time the issue reaches the boardroom, it has already travelled through multiple layers of silence. Governance frameworks often assume escalation will occur naturally. In reality…
Mar 16 · 3 min read


The Procurement Trap: Buying AI You Can’t Actually Govern
By Simon Shobrook
There’s an uncomfortable truth emerging in public sector AI adoption. Some public bodies are buying AI systems they don’t actually control. They call it innovation. They call it transformation. But when scrutiny arrives, and it always does, they discover something worrying: They can’t explain how the system works. They can’t provide evidence of oversight. And they can’t intervene when something goes wrong. That’s not innovation. That’s operational risk…
Mar 11 · 4 min read


When It Is Time to Stop: Override Is Not a Switch
By Dr Joanna Michalska
Over the past few weeks, I’ve been looking at what authority really means inside automated organisations. Responsibility grows. Escalation slows. Compliance calms fears. But none of that matters if, when it counts, the organisation can’t stop what it has set in motion. When something goes wrong, can it actually stop? Regulators no longer accept AI governance on paper alone. They want to know whether the intervention works in real time. That shift…
Mar 4 · 3 min read


Compliance without Control: Why Intervention Capacity Defines AI Governance
By Dr Joanna Michalska
In recent weeks, I have examined how automation redistributes authority and how many organisations are not structured to act at the speed of their own systems. This week turns to whether governance that satisfies audit can actually deliver intervention when it matters. AI governance has entered a new phase. The European Union Artificial Intelligence Act is moving from principle toward enforcement, and U.S. states are expanding oversight capacity…
Feb 27 · 4 min read


Why Every AI Ethics Framework Talks About the Same Things Differently
By Rajeev Chakraborty
Imagine four experts in a room. One from Brussels, one from Washington, one from London, one from a tech company. You ask them: What does "fairness" mean when an AI makes decisions about people? All four nod confidently. All four give completely different answers. This is AI ethics today. Not disagreement about whether fairness matters. Everyone agrees it matters. The problem is that nobody uses the same words to describe it, measure it, or govern it…
Feb 26 · 2 min read


The Cost of Getting AI Governance Wrong (And Who Actually Pays for It)
By Rajeev Chakraborty
If Horizon taught us anything, it’s this: when complex systems are shielded from scrutiny, the consequences fall on people. AI only raises the stakes. Picture this: You're a subpostmaster in a small British village. You've run the local Post Office for fifteen years. Your neighbours trust you. You know their children's names. Then one Tuesday, the computer system shows you've lost £40,000. You haven't. But the screen says you have. And the organisation…
Feb 24 · 5 min read


Common Responsible AI Misconceptions in Procurement
By Simon Shobrook
Picture a procurement meeting. A vendor is presenting their AI system. It screens applications, prioritises cases, and flags risks. The slides are polished. The demo is smooth. Someone asks: "Is this system responsible AI-compliant?" The vendor says yes. Everyone nods. The meeting moves on. But here's the problem. Nobody in that room agrees on what the question actually meant. And nobody agrees on what "yes" actually proved. This is where most AI procurement…
Feb 20 · 3 min read


Escalation lagging behind automation: why governance speed now matters
By Dr Joanna Michalska
Automation accelerates decisions. Governance, in many organisations, still moves at institutional speed. That gap is no longer theoretical. AI systems are now embedded across recruitment workflows, approval chains, fraud detection, case prioritisation, operational risk scoring, and vendor decisioning. They are no longer experimental projects. They are part of the operating infrastructure. They influence outcomes continuously, often without being…
Feb 20 · 2 min read


What 'Visibility' Actually Means in AI Systems (And Why Logs are not the same as Oversight)
By Simon Shobrook
Every vendor will tell you their AI system has "full visibility". They'll show you dashboards. They'll mention logging. They'll point to audit trails. And technically, they're not lying. The system does record things. Data goes in, decisions come out, and somewhere in between, a log file captures what happened. But here's the question nobody asks: can you actually see what's going on? Because logging and visibility are not the same thing. And confusing the…
Feb 19 · 3 min read
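One way to make the log-versus-visibility distinction concrete is to compare what each artefact can answer. The sketch below is hypothetical; every field name is mine, not any vendor's or the article's. A raw log line proves that something happened; a structured decision record holds what oversight later needs: which model ran, what it saw, and which policy setting turned a score into a decision.

```python
import json
from dataclasses import dataclass, asdict

# What a typical log line records: that a decision happened.
raw_log = "2026-02-19T10:42:13Z INFO scored case_4471 -> 0.81 REJECT"

# What reconstructing the decision later actually requires (hypothetical
# structure; every field here is an assumption for illustration):
@dataclass
class DecisionRecord:
    case_id: str
    model_version: str      # which model actually produced the score
    input_snapshot: dict    # what the model saw, not just what it output
    score: float
    threshold: float        # the policy setting that turned score into action
    decision: str
    human_reviewable: bool  # whether an override path exists for this case

record = DecisionRecord(
    case_id="case_4471",
    model_version="risk-screen-2.3.1",
    input_snapshot={"income_band": "C", "history_flags": 2},
    score=0.81,
    threshold=0.75,
    decision="REJECT",
    human_reviewable=True,
)
print(json.dumps(asdict(record), indent=2))
```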


Responsibility without visibility: why leadership exposure has quietly increased
By Dr Joanna Michalska
Most senior leaders are already accountable for decisions they did not personally make, cannot fully see end-to-end, and may struggle to explain when challenged. That has always been partly true in complex organisations. What has changed is how frequently AI now shapes decisions inside everyday operations, often without being experienced as a distinct system at all. If something goes wrong today, could you explain how the decision was made? Not the policy…
Feb 16 · 3 min read


The Gap Between AI Ethics Frameworks and What's Actually Measurable
By Rajeev Chakraborty
Your organisation probably has an AI ethics framework by now. Maybe you adopted one from government guidance. Maybe a working group wrote one. Maybe you signed up for industry principles. They all say roughly the same thing: be fair, be transparent, be accountable. The problem isn't the principles. The principles are universal, well-intentioned, and almost entirely useless when you're trying to govern an AI system in production. You deploy an AI system…
Feb 10 · 5 min read
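To illustrate the distance between a principle and a measurement, here is a minimal sketch with invented data; it computes one quantity that "be fair" might reduce to in production, the demographic parity gap between approval rates. It is one contested metric among several, not a sufficient test of fairness.

```python
from collections import defaultdict

# Hypothetical decision outcomes: (group, approved). In production these
# would come from the decision records the system actually emits.
outcomes = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 3/4 approved
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),   # group B: 1/4 approved
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in outcomes:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
gap = abs(rates["A"] - rates["B"])

print(rates)                                  # {'A': 0.75, 'B': 0.25}
print(f"demographic parity gap: {gap:.2f}")   # 0.50
```

Unlike "be fair", this number can be monitored, thresholded, and escalated, which is the measurability gap the article describes.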
