If speaking up isn't safe, is AI Governance real?
- Samson Lingampalli
- Mar 16
- 3 min read

Most organisations assume that if something goes wrong, someone will raise it.
In practice, many people recognise something is wrong long before leadership does. They hesitate. They recheck the data. They wait for stronger evidence. Sometimes they say nothing at all.
By the time the issue reaches the boardroom, it has already travelled through multiple layers of silence.
Governance frameworks often assume escalation will occur naturally. In reality, escalation only happens where it is safe to challenge the system.
When that safety is missing, governance fails quietly.
Automation changes the escalation dynamic
Automated systems increasingly influence decisions across industries.
They operate continuously and at scale. Small anomalies can propagate across thousands of outcomes before anyone notices.
Governance frameworks attempt to manage this risk through controls, oversight roles, and escalation processes. Yet these mechanisms depend on something rarely discussed in governance design.
Someone must be willing to raise the concern. If the environment discourages challenge, governance becomes performative rather than protective.
Psychological safety: the missing control
Psychological safety is often framed as a cultural aspiration.
In automated organisations, it becomes a structural requirement.
Escalation depends on three conditions:
- People must recognise when outcomes look wrong
- They must believe their concern will be taken seriously
- They must believe raising it will not damage their position
When these conditions are absent, signals remain local. Problems move through systems faster than warnings move through organisations.
Governance frameworks cannot function if the escalation pathway is socially blocked.
Research increasingly confirms what many leaders observe in practice.
The American Psychological Association’s 2024 Work in America Survey, involving more than 2,000 employees, found a clear relationship between psychological safety and how people respond to AI at work.
Employees who reported lower psychological safety were:
- less likely to voice concerns about AI systems
- less confident that their organisation would support them through technological change
- more likely to worry about AI replacing their role
Across the survey’s measures, the differences were statistically significant.
The implication is straightforward. People do not challenge systems when they believe doing so could harm them. When that happens, governance that exists on paper stops functioning in reality.
When silence feels safer
In most organisations, silence does not come from apathy. It emerges from rational calculation. People weigh the consequences of raising a concern against the perceived authority of the system and the decisions already taken around it.
Questions arise quietly:
- Did leadership approve this model?
- Has the risk team already signed off?
- Is the system producing results executives expect to see?
If raising the concern requires challenging those signals, hesitation becomes the safer option. The governance structure may appear intact. In practice, it is bypassed.
Leadership responsibility: comfort is not courage
Psychological safety is sometimes delegated to HR or culture initiatives. That framing misses the point.
The willingness to escalate concerns is shaped by leadership behaviour.
When leaders respond defensively to a challenge, escalation stops. When questioning is welcomed, signals move quickly.
In automated decision environments, leadership behaviour becomes part of the governance architecture.
If escalation is discouraged at the top, intervention will always arrive too late.
What boards should ask
Boards rarely test whether escalation is genuinely safe. A simple question often reveals the reality:
If someone inside the organisation believed an automated system was producing harmful outcomes, how confident would they feel raising that concern immediately?
If the answer depends on hierarchy, politics, or uncertainty about consequences, the governance structure contains hidden fragility.
Trust is not a cultural preference. It is the condition that allows governance to function under pressure.
Governance frameworks assume people will raise concerns.
Real governance begins when it is safe to do so.


