When It Is Time to Stop: Override Is Not a Switch
- Samson Lingampalli
- Mar 4
- 3 min read

Over the past few weeks, I’ve been looking at what authority really means inside automated organisations. Responsibility grows. Escalation slows. Compliance calms fears.
But none of that matters if, when it counts, the organisation can’t stop what it has set in motion.
When something goes wrong, can it actually stop?
Regulators no longer accept AI governance on paper alone. They want to know whether intervention works in real time. That shift exposes a capability most boards have never deliberately tested.
Anchoring the Risk
In July 2024, a routine update to CrowdStrike’s Falcon security software triggered one of the largest global IT outages in recent years. Airlines were grounded. Hospitals and financial institutions were disrupted. Systems that appeared stable hours earlier became inoperable at scale.
The systems were monitored, the update was authorised, and the infrastructure was considered resilient. Nothing about it signalled recklessness.
What failed was interruptibility.
Once the faulty update propagated, halting it required manual intervention across millions of endpoints. Recovery was slow, not because organisations lacked governance frameworks, but because automation had been designed for scale, not for immediate containment.
The outage was technical. The inability to halt it quickly was structural. The financial impact ran into the billions.
The lesson for boards is not about cybersecurity alone.
If systems move at machine speed, authority must as well.
Override as a Governance Condition
Override is often described as if it were a technical switch. It is not. In reality, it is a governance condition: clarity that a specific role is empowered to pause a system, on defined grounds, within a defined time. Supervisors are increasingly emphasising that contingency planning, fallback options, and defined disconnection criteria are core components of AI risk management.
In many organisations, that condition has not been deliberately designed, and intervention stalls for predictable reasons:
- Authority is delegated rather than clearly owned.
- Cross-functional agreement becomes a prerequisite.
- No predefined trigger converts “this looks wrong” into decisive action.
- Because stopping carries a perceived operational or reputational cost, decisions are quietly biased toward keeping systems running, even when thresholds are breached.
None of this appears in audit records. It only appears when pressure rises.
The Override Readiness Test
Boards do not need technical detail to test whether the override is real. They need clarity on three variables for at least one high-impact automated workflow:
- Who owns the right to stop? Name a specific role, not a committee.
- What predetermined events justify intervention? Replace “if something feels wrong” with explicit limits and thresholds.
- What is the expected time to intervention, from detection of a credible concern to authorised pause?
This is not a complicated technical exercise; it is a governance one. If any answer depends on informal escalation, “picking up the phone to Legal,” or waiting for a committee, override is conditional. Conditional authority becomes more fragile as scale and complexity increase.
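To make the three variables concrete, here is a minimal sketch of what an override policy could look like once it is written down rather than implied. It is illustrative only: the workflow, role, metrics, and thresholds are assumptions, not a standard.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class OverridePolicy:
    workflow: str               # the automated workflow this policy covers
    stop_owner: str             # a named role, not a committee
    max_pause_latency_min: int  # budget from credible concern to authorised pause
    triggers: dict[str, float] = field(default_factory=dict)  # metric -> threshold

    def breached(self, observed: dict[str, float]) -> list[str]:
        """Return the triggers whose thresholds the observed metrics exceed."""
        return [name for name, limit in self.triggers.items()
                if observed.get(name, 0.0) > limit]

# Hypothetical example: a credit-decisioning workflow with explicit limits.
policy = OverridePolicy(
    workflow="credit-decisioning",
    stop_owner="Head of Model Risk",
    max_pause_latency_min=30,
    triggers={"error_rate_pct": 2.0, "complaints_per_1k": 5.0},
)

if hits := policy.breached({"error_rate_pct": 3.1, "complaints_per_1k": 1.2}):
    print(f"Pause justified for {policy.workflow}: {hits} "
          f"(owner: {policy.stop_owner}, within {policy.max_pause_latency_min} min)")
```

The artefact matters less than the discipline it forces: each field must have an answer before the workflow goes live, and each answer is testable in a drill.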
Time to intervene is measurable, yet almost no board sees a simple metric for “intervention latency”, even though they routinely receive dashboards for performance and risk. In automated enterprises, intervention latency is not a technical statistic. It is a measure of authority. A practical question for the board: how long would it take, in practice, to pause a live automated system once a credible concern is raised? Answering it does not require new technology. It requires clarity of authority, thresholds, and process.
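To make intervention latency concrete, here is a minimal sketch of how it could be computed from a timed drill. The timestamps are hypothetical; any incident log with a detection time and an authorised-pause time yields the same figure.

```python
from datetime import datetime, timezone

# Hypothetical drill log: in practice these timestamps would come from an
# incident ticket or audit trail, not hard-coded values.
detected_at = datetime(2025, 3, 4, 9, 12, tzinfo=timezone.utc)  # credible concern raised
paused_at = datetime(2025, 3, 4, 10, 3, tzinfo=timezone.utc)    # authorised pause executed

latency_min = (paused_at - detected_at).total_seconds() / 60
budget_min = 30  # the board-approved time budget (assumed)

print(f"Intervention latency: {latency_min:.0f} min "
      f"({'within' if latency_min <= budget_min else 'over'} the {budget_min} min budget)")
```

One number, tracked over successive drills, tells a board whether its authority to stop is getting faster or slower as the organisation automates.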
Running a structured scenario exercise once a year, timed, documented, and focused on a real workflow, will reveal more about governance capability than most documentation reviews.
Why This Matters Now
Automation is moving into core decision processes, but accountability remains with named executives and boards. Systems may operate autonomously. Liability does not.
Regulatory scrutiny is tightening. Supervisory authorities are signalling that defined disconnection criteria and viable fallback mechanisms are becoming enforceable expectations.
Exposure builds quietly when intervention authority is ambiguous or slow.
Override is not proven when systems function. It is proven when someone can stop them quickly, under stress.
If your board cannot state, within a minute, who can stop a system within thirty minutes and under what conditions, governance may exist on paper, but override does not.
If control is not deliberately designed, responsibility will find its way to the boardroom anyway.


