🚨🚨🚨Why aren’t all AI models being monitored for RAI compliance?🚨🚨🚨

The alarm bells should be ringing. With EU AI Act fines reaching up to €35 million or 7% of global turnover, and major compliance deadlines already in effect since February 2025, we’re seeing a concerning gap in AI model monitoring.


The UK’s AI Playbook, published just this February, emphasises the need for “meaningful human control” and understanding AI limitations. Meanwhile, the EU AI Act’s transparency and documentation requirements for high-risk AI systems became binding on 2 August 2025.


So why the monitoring gap?



The evidence points to several factors:


Organisations that haven’t conducted internal risk assessments to identify prohibited practices are already non-compliant. As IBM’s Terry Halvorsen notes, “Agencies are sometimes in too much of a hurry to get AI running”.


While major tech companies like Microsoft implement compliance tooling to monitor and enforce RAI rules, many organisations are still developing structured programmes to raise awareness of AI development risks.



The gap isn’t just technical - it’s structural:


Banks are appointing dedicated Responsible AI leaders to complement existing GRC functions, but this appears to be the exception rather than the rule across industries.


The Responsible AI Institute’s new monitoring tools, including their RAI Watchtower Agent for compliance gaps and security vulnerabilities, only began rolling out in Q2 2025.



The bottom line:



The European Commission has made it clear there are no plans for transition periods or postponements. Organisations need monitoring frameworks now, not later.


With obligations for high-risk AI systems requiring risk assessment, data quality, documentation, and transparency, the question isn’t whether to monitor - it’s how quickly you can implement proper governance.



The regulatory clock is ticking. Are you monitoring? Reach out; maybe I can help.


