Samson Lingampalli

Profile

Join date: Feb 10, 2026

Posts (20)

May 7, 2026 · 4 min
Observability is not governability
By Dr Joanna Michalska. Over the past several weeks, this series has traced a consistent gap: the gap between what organisations know and what they can do, between signals that appear and decisions that follow, between the oversight roles that exist and the intervention authority that is exercised. This time, the same gap appears in a different form, one growing more visible as AI governance investment accelerates: the gap between observing a system and being able to govern it. The...

May 1, 2026 · 6 min
Justification under uncertainty: the missing governance capability
By Dr Joanna Michalska. The last article argued that escalation is a design problem: structural conditions prevent concerns from travelling upward, and a credible intervention deficit keeps people silent even when they know something is wrong. This week, the question moves to the moment after escalation. Assume the concern has reached the authority. A decision now has to be made. The system has not clearly failed. No threshold has been formally breached. The...

Apr 22, 2026 · 4 min
Who decides when the AI is wrong?
By Dr Joanna Michalska. When an AI system produces outputs that worry the people closest to it, but no threshold has formally been breached, who decides whether that is a problem worth acting on? This is where a quiet but consistent governance failure appears: not the absence of escalation pathways, but the absence of defined intervention thresholds. The Borderline Problem: AI systems do not typically fail in obvious ways; they drift. They produce outputs that are statistically defensible but...
