Observability is not governability
- Samson Lingampalli

Over the past several weeks, this series has traced a consistent gap.
The gap between what organisations know and what they can do.
Between signals that appear and decisions that follow. Between the oversight roles that exist and the intervention authority that is exercised.
This time, the same gap appears in a different form. One that is growing more visible as AI governance investment accelerates.
The gap between observing a system and being able to govern it.
The observability assumption
Investment in AI monitoring is increasing.
Dashboards that track model performance. Drift detection tools. Output consistency monitoring. Anomaly flagging. Real-time alert systems that surface when a model begins behaving differently from its baseline.
These capabilities are genuinely useful. They represent real progress in making AI systems visible.
But they carry an assumption worth examining carefully.
The assumption is that visibility creates control. If leadership can see what an AI system is producing, governance is happening.
It is not.
Seeing a problem and being able to act on it are different organisational capabilities. The gap between them is where most AI governance failures live.
The 2025 DORA report, drawing on nearly 5,000 technology professionals globally, found that AI adoption continues to have a negative relationship with software delivery stability. The report's central finding is that AI amplifies what is already there. If the underlying governance architecture is weak, better dashboards make the weakness more visible. They do not resolve it.
What observability actually provides
Observability answers: is performance drifting? Are outputs consistent? Are there anomalies worth investigating?
These are necessary questions. But they are not governance questions.
Governance questions are different.
Who is responsible for deciding whether what the dashboard shows constitutes a problem? What is the threshold at which an anomaly becomes an escalation obligation? Who can act on that escalation, with what authority, within what time frame?
A monitoring tool can surface a signal. It cannot answer any of those questions.
The International AI Safety Report 2025, authored by 96 international AI experts and backed by 30 governments, draws this distinction explicitly. It identifies human oversight capacity as a foundational requirement for safe AI deployment and distinguishes it clearly from monitoring capability. Oversight requires authority. Monitoring capability alone does not provide it.
The Air Canada lesson
In 2024, Air Canada's AI chatbot told a customer he could book a full-fare flight and claim a bereavement discount retroactively, contradicting the airline's actual policy. The customer booked travel based on the chatbot's advice. Air Canada initially argued the chatbot was a separate legal entity responsible for its own statements. The tribunal ruled otherwise. Air Canada was held liable.
The case is frequently discussed as an AI accountability failure.
It is more precisely a governability gap.
Someone at Air Canada almost certainly had access to logs showing what the chatbot was saying to customers. The technical capability to see the system's outputs existed.
What was absent was a governance structure that connected those outputs to a named person with authority to review, question, and intervene when what the system was saying diverged from company policy.
Observability without governability produced liability without warning.
Where the gap sits in practice
Three structural conditions create the distance between observability and governability in most organisations.
Metrics without owners. Dashboards report performance data to governance functions that do not have defined authority to act on what they see. The metric exists. The mandate to respond does not.
Thresholds without obligations. Alert systems flag anomalies. But an alert reaching an inbox is not the same as an obligation to escalate being attached to that alert. When what happens after the alert is unclear, alerts become noise.
Visibility without intervention capacity. A governance body receives detailed reporting on AI system performance and has no defined mechanism to pause, adjust or halt a system based on what the reporting shows. Oversight exists. Control does not.
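To make the contrast concrete, here is a minimal sketch of what closing these three gaps can look like when an alert is defined. Everything in it is illustrative, not a reference to any real tool: the EscalationPolicy structure, the output_drift metric, and the pause action are hypothetical. The point is structural. The owner, the deadline, and the intervention are fields of the alert itself, not afterthoughts.

```python
# Illustrative sketch: an alert that cannot exist without an owner,
# an obligation, and an intervention mechanism. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EscalationPolicy:
    metric: str                    # what the dashboard shows
    owner: str                     # named person obligated to respond
    threshold: float               # point at which an anomaly becomes an obligation
    respond_within_hours: int      # time frame that matters
    intervene: Callable[[], None]  # defined mechanism: pause, adjust, or halt

def on_alert(policy: EscalationPolicy, observed: float) -> None:
    """An alert is only governance if it carries an owner, a deadline,
    and an intervention the owner has authority to trigger."""
    if observed < policy.threshold:
        return  # a signal, not yet an escalation obligation
    print(f"{policy.metric}: {observed:.2f} breaches {policy.threshold}. "
          f"{policy.owner} must act within {policy.respond_within_hours}h.")
    policy.intervene()  # authority attached to the alert, not left to consensus

# Hypothetical wiring: a drift metric bound to a named owner and a pause action.
policy = EscalationPolicy(
    metric="output_drift",
    owner="Head of Model Risk",
    threshold=0.25,
    respond_within_hours=4,
    intervene=lambda: print("Model routed to human review; deployment paused."),
)
on_alert(policy, observed=0.31)
```

The design choice worth noticing: an alert definition that is missing any of those fields will not compile into existence. A dashboard metric without an owner, by contrast, can exist indefinitely.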
ISACA's February 2026 guidance frames this directly. Escalation paths for high-risk use cases must exist as operational conditions, not procedural aspirations. Business leaders must retain accountability for AI-enabled decisions. That accountability requires more than a dashboard. It requires the authority and mechanism to act on what the dashboard shows.
What governability actually requires
Observability answers: what is the system doing?
Governability answers: who can change what the system does, under what conditions, within what time frame?
An organisation with strong observability and weak governability will see problems clearly. It will not be able to respond to them decisively.
Gartner's June 2025 research projects that over 40% of agentic AI projects will be cancelled by 2027, citing inadequate risk controls among the primary causes. Investment in monitoring tools without corresponding investment in intervention authority is a precise description of inadequate risk controls in practice.
Three questions test whether observability has translated into governability.
For each AI system with monitoring in place: when the dashboard shows an anomaly, who specifically is obligated to respond to it?
Does that person have defined authority to act on what they see, or do they need to seek consensus before taking any action?
If they escalate, does an intervention mechanism exist that can change the system's behaviour within a time frame that matters?
If any of those questions produce an unclear answer, the organisation has invested in seeing. It has not yet invested in governing.
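Those three questions can be run as a checklist. The sketch below is one illustrative way to do it; the field names and the example systems are hypothetical, not a standard or a product.

```python
# Illustrative sketch: the three-question test applied per monitored system.
# Field names and example systems are hypothetical.
def governability_audit(system: dict) -> list[str]:
    """Return the unanswered governance questions for one AI system."""
    gaps = []
    if not system.get("obligated_responder"):
        gaps.append("No named person is obligated to respond to anomalies.")
    if not system.get("responder_has_authority"):
        gaps.append("The responder must seek consensus before acting.")
    if not system.get("intervention_mechanism"):
        gaps.append("No mechanism can change system behaviour in time.")
    return gaps

systems = [
    {"name": "support-chatbot", "obligated_responder": "Head of CX",
     "responder_has_authority": True, "intervention_mechanism": "kill switch"},
    {"name": "pricing-model", "obligated_responder": None,
     "responder_has_authority": False, "intervention_mechanism": None},
]
for s in systems:
    gaps = governability_audit(s)
    print(f"{s['name']}: {'governable' if not gaps else 'observability only'}")
    for g in gaps:
        print(f"  - {g}")
```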
There is a version of AI governance that looks impressive from the outside.
Dashboards. Monitoring tools. Oversight committees. Risk reporting. Audit trails.
All of that is genuinely useful.
None of it tells you whether the right person will act on what it shows.
The organisations that govern AI well are not the ones with the most dashboards.
They are the ones that have connected what they see to someone who can act on it.
In your organisation, when the AI monitoring dashboard shows something concerning, is there a named person with a defined obligation and the authority to respond, or does the concern enter a review process with no clear endpoint?
Part of a 12-week series on AI governance and organisational readiness.
Dr Joanna Michalska is the founder of Ethica Group Ltd, which advises boards and C-suite leaders on decision authority and governance architecture under automation.
Sources
Google DORA 2025 · https://dora.dev/research/2025/dora-report/
International AI Safety Report 2025
ISACA February 2026 · https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2026/responsible-ai-from-emerging-technology-to-executive-governance-imperative
Gartner June 2025 · https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027


