

Observability is not governability
By Dr Joanna Michalska
Over the past several weeks, this series has traced a consistent gap. The gap between what organisations know and what they can do. Between signals that appear and decisions that follow. Between the oversight roles that exist and the intervention authority that is exercised. This time, the same gap appears in a different form. One that is growing more visible as AI governance investment accelerates. The gap between observing a system and being able to…
Samson Lingampalli
6 days ago · 4 min read


Justification under uncertainty: the missing governance capability
By Dr Joanna Michalska
The last article’s argument was about escalation as a design problem. About the structural conditions that prevent concerns from travelling upward, and the credible intervention deficit that keeps people silent even when they know something is wrong. This week, the question moves to the moment after escalation. Assume the concern has reached the authority. A decision now has to be made. The system has not clearly failed. No threshold has been formally…
Samson Lingampalli
May 1 · 6 min read


Who decides when the AI is wrong?
By Dr Joanna Michalska
When an AI system produces outputs that worry the people closest to it, but no threshold has formally been breached, who decides whether that is a problem worth acting on? This is where a quiet but consistent governance failure appears. Not the absence of escalation pathways, but the absence of defined intervention thresholds. The Borderline Problem AI systems do not typically fail in obvious ways; they drift. They produce outputs that are statistical…
Samson Lingampalli
Apr 22 · 4 min read


What to watch in AI governance over the next 18 months
By Simon Shobrook
Governance as a procurement gate. The UK has no single AI law. The government has deliberately taken a sector-regulator approach, asking the ICO, FCA, CMA and others to apply existing frameworks to AI in their respective domains. But that doesn't mean procurement teams have no obligations. Cabinet Office guidance, the Algorithmic Transparency Recording Standard, and growing pressure from central government on audit-readiness are already shaping what gets…
Samson Lingampalli
Apr 21 · 2 min read


What “mature AI governance” actually looks like in practice
By Simon Shobrook
There’s a lot of talk about concepts, policies, and frameworks. But very little is actually deployed and proven in practice. The reality? The horse has already bolted. By autumn 2023 (ancient history in the AI universe!), the National Audit Office had identified 74 AI use cases already deployed across surveyed government bodies. That’s a conservative baseline. Since then, Cabinet Office transparency publications under the Algorithmic Transparency Record…
Samson Lingampalli
Apr 14 · 5 min read


Why thresholds matter more than accuracy scores
By Simon Shobrook
Discussions of AI performance typically begin with accuracy. A model is described as “95% accurate” or as outperforming a benchmark. These statements are useful. They provide a high-level indication of model capability and allow for comparison across approaches. However, once an AI system is deployed into an operational environment, accuracy is no longer the primary determinant of outcomes. Consider pre-AI rule-based systems, such as highly accurate f…
Samson Lingampalli
Apr 9 · 4 min read


Building an AI governance framework that actually gets used
By Simon Shobrook
Governance frameworks often fail simply because they aren’t put into practice. Add AI into the mix, and things can quickly spiral out of control. Policies get written. Principles get agreed. Committees get formed. And then reality happens. AI systems go live. Models change. Risks evolve. Suppliers update without visibility. And the “framework” sits in a document somewhere, disconnected from what’s actually happening. That’s the gap. If governa…
Samson Lingampalli
Mar 24 · 1 min read


Compliance without Control: Why Intervention Capacity Defines AI Governance
By Dr Joanna Michalska
In recent weeks, I have examined how automation redistributes authority and how many organisations are not structured to act at the speed of their own systems. This week turns to whether governance that satisfies audit can actually deliver intervention when it matters. AI governance has entered a new phase. The European Union Artificial Intelligence Act is moving from principle toward enforcement, and U.S. states are expanding oversight capacity. Pu…
Samson Lingampalli
Feb 27 · 4 min read


Why Every AI Ethics Framework Talks About the Same Things Differently
By Rajeev Chakraborty
Imagine four experts in a room. One from Brussels, one from Washington, one from London, one from a tech company. You ask them: What does "fairness" mean when an AI makes decisions about people? All four nod confidently. All four give completely different answers. This is AI ethics today. Not disagreement about whether fairness matters. Everyone agrees it matters. The problem is that nobody uses the same words to describe it, measure it, or govern i…
Samson Lingampalli
Feb 26 · 2 min read


The Cost of Getting AI Governance Wrong (And Who Actually Pays for It)
By Rajeev Chakraborty
If Horizon taught us anything, it’s this: when complex systems are shielded from scrutiny, the consequences fall on people. AI only raises the stakes. Picture this: You're a subpostmaster in a small British village. You've run the local Post Office for fifteen years. Your neighbours trust you. You know their children's names. Then one Tuesday, the computer system shows you've lost £40,000. You haven't. But the screen says you have. And the organisa…
Samson Lingampalli
Feb 24 · 5 min read


Common Responsible AI Misconceptions in Procurement
By Simon Shobrook
Picture a procurement meeting. A vendor is presenting their AI system. It screens applications, prioritises cases, and flags risks. The slides are polished. The demo is smooth. Someone asks: "Is this system responsible AI-compliant?" The vendor says yes. Everyone nods. The meeting moves on. But here's the problem. Nobody in that room agrees on what the question actually meant. And nobody agrees on what "yes" actually proved. This is where most AI procure…
Samson Lingampalli
Feb 20 · 3 min read


What 'Visibility' Actually Means in AI Systems (And Why Logs Are Not the Same as Oversight)
By Simon Shobrook
Every vendor will tell you their AI system has "full visibility". They'll show you dashboards. They'll mention logging. They'll point to audit trails. And technically, they're not lying. The system does record things. Data goes in, decisions come out, and somewhere in between, a log file captures what happened. But here's the question nobody asks: can you actually see what's going on? Because logging and visibility are not the same thing. And confusing th…
Samson Lingampalli
Feb 19 · 3 min read


The Gap Between AI Ethics Frameworks and What's Actually Measurable
By Rajeev Chakraborty
Your organisation probably has an AI ethics framework by now. Maybe you adopted one from government guidance. Maybe a working group wrote one. Maybe you signed up for industry principles. They all say roughly the same thing: be fair, be transparent, be accountable. The problem isn't the principles. The principles are universal, well-intentioned, and almost entirely useless when you're trying to govern an AI system in production. You deploy an AI system…
Samson Lingampalli
Feb 10 · 5 min read


While Everyone Panics About the EU AI Act, Smart Companies Are Getting Ahead
Remember when GDPR hit and everyone panicked? Well, here we go again. The EU AI Act is live, and honestly? Most companies I talk to are flying blind. Here's the thing: while your competitors are scrambling to figure out what "high-risk AI systems" even means, there's a massive opportunity sitting right in front of you. I've been working with companies across Europe and the UK, and the ones getting this right aren't just checking compliance boxes. They're actually turning AI…
shashikantsingh090
Dec 29, 2025 · 2 min read


Guarding the Guards: What Welfare AI Reveals About the UK’s Regulatory Blind Spot
Did you know that under the EU AI Act (Chapter II, Art. 5), with which most AI service providers must comply, providers are prohibited from "social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people"? The Computer Weekly article "DWP 'fairness analysis' revealed bias in AI fraud detection system" states: "the assessment showed there is a 'statistically significant refer…
shashikantsingh090
Dec 26, 2025 · 2 min read


Responsible AI: Less Aspiration, More Operation
It’s easy to say “we want ethical AI.” It’s harder to operationalise it. Looking at assurance checklists, a few themes always stand out:
• User access management: Who can touch training data, and how often are permissions reviewed?
• Vulnerability management cadence: Are model security checks monthly, quarterly, or ad-hoc?
• Explainability logging: Is every model decision traceable back to inputs and assumptions?
• Sustainability: Do we measure compute costs and energy us…
shashikantsingh090
Dec 24, 2025 · 1 min read


The Complex Relationship Between AI Investments and Human Needs in a Rapidly Evolving World
The technology has already woven itself into our lives. The only question is which companies survive the journey. Is AI in a bubble? Probably. Will it crash? Quite possibly. Does any of that matter for AI's long-term trajectory? Not really. Even Sam Altman, the CEO of OpenAI, has acknowledged that investors are "overexcited" about AI. The Bank of England has warned about the risks of a global market correction. Apollo Global Management's chief economist has stated that the cu…
rajeev385
Dec 9, 2025 · 4 min read
