AI Governance is not a training problem - it is a design problem
- Samson Lingampalli
- Mar 23
- 3 min read

Last week, I wrote about psychological safety in AI governance: that people must feel safe enough to raise concerns before escalation can begin.
But safety alone is not enough.
Even when someone raises a warning, the organisation still needs a structure capable of converting that signal into a decision. And this is where, across many institutions, governance quietly breaks, a failure often recognised only when it is too late.
Spending on governance. Not building it.
McKinsey's 2025 State of AI report found that only 6% of organisations qualify as AI high performers. Two-thirds have not begun scaling AI across the enterprise. Adoption is near-universal. Capability is not.
The pattern is consistent: the current focus is on workshops, policies, frameworks … People may well understand what good governance should look like. Yet when something goes wrong, escalation stalls.
This is the gap between awareness and capability. Training creates understanding. Capability is only revealed when the organisation has to act under pressure.
The ownership trap
Harvard Business Review captured it precisely this month. A Fortune 500 insurance CEO convened his senior team to discuss who owns AI. The CIO, COO, CFO, CRO, CHRO and Chief Data Officer each had a legitimate claim. Every function. Every hand raised.
The result was not clarity. It was fragmentation.
This is the ownership trap. Organisations invest considerable effort in defining who owns AI. Far fewer define who can intervene, under what conditions, with what authority, without delay.
Ownership is a governance question. Intervention capacity is an operational one. Confusing the two leaves signals with nowhere to go.
Gartner's June 2025 research points to the consequences: it predicts that over 40% of agentic AI projects will be cancelled by 2027, with inadequate risk controls cited as a primary cause. Not a technology failure but a governance one.
Amazon's March: a structural lesson, not a technical one
Amazon mandated that 80% of its engineers use its AI coding tool Kiro weekly. In the months that followed, two outages wiped out 6.3 million orders across North American marketplaces.
The individual failures were identified and resolved each time. But the structural pattern, a trend of AI-assisted incidents running from Q3 2025 through March 2026, took six months to trigger a governance response.
That lag, between recurring detection and controlled intervention, is the gap that matters.
The 2025 DORA report, drawing on nearly 5,000 technology professionals globally, confirmed the broader picture: AI adoption continues to have a negative relationship with software delivery stability. Speed of deployment is outrunning the capacity to govern what gets deployed.
Amazon's response, introducing mandatory senior sign-off and what its SVP called "controlled friction", is the right instinct. It arrived six months too late.
What genuinely ready organisations seem to share
The organisations that appear to govern AI effectively tend to have three structural features that most governance programmes do not build:
Named intervention authority. Not who owns AI broadly, but who can halt a deployment or override an automated decision, documented as precisely as any escalation matrix in a regulated environment.
A tested signal pathway. The escalation route from operational detection to executive decision must be exercised before it is needed. A pathway that has never been used under pressure is not a pathway. It is a diagram.
Deliberate friction on high-risk decisions. The goal is not to slow everything down. It is to ensure that unreviewed change does not scale. The sketch below gives a feel for what these three features look like once they are written down precisely enough to be exercised.
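To make that concrete, here is a minimal sketch in Python of an escalation matrix encoded as data rather than as a diagram. Every role name, risk tier and contact alias here is hypothetical, and this is one illustrative form among many, not a prescription. The point is only that a pathway written this precisely can be drilled, and a drill is what separates a pathway from a diagram.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Authority:
    role: str           # e.g. "Head of Platform" (hypothetical)
    can_halt: bool      # may halt a deployment outright
    can_override: bool  # may override an automated decision
    contact: str        # a paging alias, not a shared inbox

# Named intervention authority per risk tier: the "who", written down as
# precisely as an escalation matrix in a regulated environment would be.
ESCALATION_MATRIX: dict[str, list[Authority]] = {
    "low": [
        Authority("On-call engineer", can_halt=False, can_override=True, contact="oncall-eng"),
    ],
    "high": [
        Authority("Head of Platform", can_halt=True, can_override=True, contact="head-platform"),
        Authority("Duty executive", can_halt=True, can_override=True, contact="duty-exec"),
    ],
}

def route(risk_tier: str, needs_halt: bool) -> Authority:
    """Return the first named authority empowered to act on this signal."""
    for authority in ESCALATION_MATRIX.get(risk_tier, []):
        if authority.can_halt or not needs_halt:
            return authority
    # Deliberate friction: a signal with no named authority fails loudly
    # instead of scaling silently.
    raise LookupError(f"no authority for tier '{risk_tier}' at {datetime.now():%H:%M}")

def drill() -> None:
    """Exercise the pathway before it is needed; an unused pathway is a diagram."""
    for tier in ESCALATION_MATRIX:
        print(tier, "->", route(tier, needs_halt=(tier == "high")).contact)

if __name__ == "__main__":
    drill()
```

Whether this lives in code, a runbook or an incident tool matters less than the property the form enforces: the question "who can act?" has exactly one answer, and it can be tested before 11 pm on a Thursday.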
The organisations that will govern AI well are not those with the most comprehensive training catalogues.
They are the ones that have answered a harder question: when a signal appears at 11 pm on a Thursday, who has the authority to act, and does the structure know how to reach them?
That question is not answered in a workshop but in the design of the organisation itself.
What does escalation capacity look like where you are right now: formal structure, informal networks, or something still being figured out? Genuinely curious what others are seeing.
Part of a 12-week series on AI governance and organisational readiness.
Dr Joanna Michalska is the founder of Ethica Group Ltd, which advises boards and C-suite leaders on decision authority and governance architecture under automation.


