

The Gap Between Detecting a Problem and Deciding What to Do About It
By Dr Joanna Michalska. Last week, the argument was that governance fails not because people lack training or awareness, but because organisations are not structurally designed to convert signals into decisions. This week, I want to go deeper into that question: why does escalation break down even when people find the courage to speak up? The credible intervention gap. Here is the uncomfortable finding that sits underneath most escalation research. In regulated organisations,
Samson Lingampalli
Mar 30 · 5 min read


Guarding the Guards: What Welfare AI Reveals About the UK’s Regulatory Blind Spot
Did you know that under the EU AI Act (Chapter II, Art. 5), with which most AI service providers must comply, the following is prohibited: "social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people". A Computer Weekly article, "DWP 'fairness analysis' revealed bias in AI fraud detection system", states: "the assessment showed there is a 'statistically significant refer
shashikantsingh090
Dec 26, 2025 · 2 min read


Responsible AI: Less Aspiration, More Operation
It’s easy to say “we want ethical AI.” It’s harder to operationalise it. Looking at assurance checklists, a few themes always stand out:
• User access management: Who can touch training data, and how often are permissions reviewed?
• Vulnerability management cadence: Are model security checks monthly, quarterly, or ad hoc?
• Explainability logging: Is every model decision traceable back to inputs and assumptions?
• Sustainability: Do we measure compute costs and energy us
shashikantsingh090
Dec 24, 2025 · 1 min read
