Justification under uncertainty: the missing governance capability
- Samson Lingampalli
- May 1
- 6 min read

The last article’s argument was about escalation as a design problem: the structural conditions that prevent concerns from travelling upward, and the credible intervention deficit that keeps people silent even when they know something is wrong.
This week, the question moves to the moment after escalation.
Assume the concern has reached the authority. A decision now has to be made.
The system has not clearly failed. No threshold has been formally breached. The outputs are borderline, the data is ambiguous, and the person with authority to act must decide whether to intervene.
Most governance frameworks have nothing to say about this moment.
They define who is responsible. They document what controls exist. They require that decisions be made.
What they do not do is prepare the decision-maker for the hardest part of the job: justifying a consequential call under genuine uncertainty, in real time, in a way that will withstand scrutiny after the fact.
The gap nobody names
There is a governance capability that is consistently absent from frameworks, training programmes, and board-level discussions.
Call it justification under uncertainty. The ability to make and record a defensible decision when the signal is ambiguous, the evidence is incomplete, and waiting carries its own risk.
This is not a technical problem. It is not an escalation problem. It is a leadership problem with structural dimensions that most organisations have not addressed.
The ISACA analysis published in February 2026 put it precisely: the real AI risk is when the wrong person gets a correct answer. The governance question is not only whether the right information reached the right person. It is whether that person was equipped and authorised to act on it in a way that could be justified and evidenced.
Those are different questions. Most governance investment addresses the first. Almost none address the second.
Why borderline decisions are the hardest
AI systems rarely produce obviously wrong outputs. They drift. They produce results that are statistically defensible but operationally troubling. Performance indicators remain within tolerance while experienced practitioners recognise something has shifted.
In these conditions, the decision to intervene is not a matter of reading a clear signal. It is a matter of exercising judgment under uncertainty — and doing so in a way that can be explained afterwards.
This is where organisations consistently struggle.
The WilmerHale and EqualAI AI Governance Playbook for Boards, published in January 2026, is explicit on the legal dimension: under the Delaware fiduciary standard, board members are now required to exercise informed judgment on AI-related decisions. Informed judgment is not the same as certainty. It requires a documented reasoning process that demonstrates the decision-maker considered the available evidence, weighed the relevant factors, and reached a conclusion that a reasonable person in that position could have reached.
That standard applies not only at the board level. It applies to any named decision-maker in an AI governance structure.
If an intervention decision is challenged after the fact, the question will not be whether the right person made it. The question will be whether they can show how they made it.
The accountability without reasoning problem
Most AI governance structures assign accountability clearly. A named person is responsible for a specific system or decision domain.
What they rarely do is define what constitutes an adequate justification for a borderline call.
This creates a structural gap. The person with authority knows they are accountable. They do not know what reasoning process is expected of them, what evidence they should document, or what standard they will be held to if the decision is challenged.
The result is one of two failure modes.
The first is paralysis. The decision-maker defers, escalates further, or waits for clearer evidence. By the time the signal is unambiguous, the window for intervention has passed, and harm has accumulated.
The second is improvisation. The decision is made on informal judgment, without documentation, without a structured reasoning process. The call may be correct. But it cannot be defended.
EC-Council research published in February 2026 draws the distinction that matters here: performance metrics tell you what an AI system produced. Accountability metrics tell you whether the humans governing it can show how decisions were made. Most organisations measure the first. Almost none have built the second.
What justification under uncertainty actually requires
Three things. None of them is complicated in principle. All of them are consistently absent in practice.
A defined decision standard. Not a performance threshold, but a governance threshold. What level of concern, what pattern of outputs, what combination of signals is sufficient to justify intervention even in the absence of clear failure? This should be documented before the system goes live, not determined in the moment of pressure.
A structured reasoning record. When a borderline decision is made, the decision-maker should be able to record in plain language: what they observed, what evidence they weighed, what alternatives they considered, and why they reached the conclusion they did. This is not a bureaucratic exercise. It is the difference between a defensible decision and an improvised one.
A tested standard, not just a written one. The IMDA Model AI Governance Framework for Agentic AI, launched at Davos in January 2026, requires that human oversight capacity be operational, not merely documented. The same principle applies here. A justification standard that has never been exercised under pressure is not a standard. It is an intention.
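To make the second requirement concrete, here is a minimal sketch of what a structured reasoning record might look like if captured as a simple data structure. The field names, the example values, and the Python rendering are illustrative assumptions only; no framework cited in this article prescribes this format.

```python
# Illustrative sketch only: field names and example values are assumptions,
# not a template prescribed by any framework cited in this article.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class InterventionDecisionRecord:
    system_name: str                    # the AI system the decision concerns
    decision_maker: str                 # the named intervention authority
    decision: str                       # the call that was made
    observations: list[str]             # what was observed, in plain language
    evidence_weighed: list[str]         # metrics, reports, practitioner input considered
    alternatives_considered: list[str]  # options weighed, including doing nothing
    rationale: str                      # why this conclusion was reached
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# Hypothetical example: a borderline call recorded at the moment it is made.
record = InterventionDecisionRecord(
    system_name="order-routing model",
    decision_maker="Head of Operations",
    decision="Pause automated routing pending review",
    observations=["Outputs remain within tolerance, but practitioners report a shift in edge-case behaviour"],
    evidence_weighed=["Weekly performance dashboard", "Three practitioner escalations in ten days"],
    alternatives_considered=["Continue with heightened monitoring", "Wait for the next scheduled review"],
    rationale="The pattern of practitioner concern outweighs in-tolerance metrics; the cost of pausing is lower than the cost of accumulated harm.",
)
print(record)
```

The point is not the tooling; a form or a template in a ticketing system would serve equally well. What matters is that these elements are captured at the time the decision is made, not reconstructed afterwards.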
The McDonald's lesson
In June 2024, McDonald's shut down its IBM AI drive-thru ordering pilot across 100 US locations after a series of failures that included orders reaching 260 chicken nuggets. The system was performing within its parameters. No threshold had been breached in any formal sense.
The decision to shut it down was the right call. But the more important governance question is this: what was the standard that justified that decision? Who made it, on what reasoning, and how was it recorded?
When AI systems operate within tolerance while producing outcomes that are operationally unacceptable, intervention depends entirely on someone's ability to make and justify a call that the system's own metrics would not support.
That is the governance capability most organisations have not built.
A diagnostic for boards
The Treasury Select Committee's report on AI in financial services asked regulators directly whether they are doing enough. The responses from HM Treasury, the Bank of England and the FCA were published on 16 April 2026. The Committee's concern was pointed: accountability for systemic AI dependency remains unassigned. Boards of regulated firms cannot rely on the critical third parties (CTP) regime to underwrite third-party AI risk. That accountability sits with them.
The same logic applies internally. If the regulator is asking who carries accountability when AI systems fail, the board needs to be able to answer that question at the level of individual decisions, not governance frameworks.
Three questions reveal whether justification capability exists.
For the highest-impact AI system in your organisation: if a decision-maker chose to intervene today based on pattern recognition rather than threshold breach, what standard would justify that decision?
Is that standard documented and known to the person named as the intervention authority?
If that person were challenged on their decision six months from now, what record exists of the reasoning they applied?
If those questions produce unclear answers, the organisation has intervention authority on paper. It does not have a justification capability in practice.
Authority without justification capability is not governance.
It is exposure with documentation.
In your organisation, if someone with intervention authority made a borderline call today, what record would exist of how that decision was reached — and would it withstand scrutiny from a regulator, a board, or a court?
Part of a 12-week series on AI governance and organisational readiness.
Dr Joanna Michalska is the founder of Ethica Group Ltd, which advises boards and C-suite leaders on decision authority and governance architecture under automation.
Sources:
ISACA: AI Answers Are Becoming Business Decisions, February 2026 · https://www.isaca.org/resources/news-and-trends/newsletters/atisaca/2026/volume-3/ai-answers-are-becoming-business-decisions-most-organizations-arent-governing-them-that-way
WilmerHale / EqualAI: AI Governance Playbook for Boards, January 2026 · https://www.wilmerhale.com/en/insights/client-alerts/20260122-board-oversight-and-artificial-intelligence-key-governance-priorities-for-2026
EC-Council: Board-Level Metrics for Measuring AI Accountability, February 2026 · https://www.eccouncil.org/cybersecurity-exchange/responsible-ai-governance/board-level-metrics-for-measuring-ai-accountability/
Singapore IMDA Model AI Governance Framework for Agentic AI, January 2026 · https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2026/new-model-ai-governance-framework-for-agentic-ai
McDonald's IBM drive-thru AI pilot shutdown, June 2024 · AI Incident Database · https://incidentdatabase.ai/
Treasury Select Committee — AI in Financial Services: Regulator Responses, 16 April 2026 · https://www.regulationtomorrow.com/2026/04/tsc-publishes-regulator-responses-to-its-report-on-ai-in-financial-services/


