Common Responsible AI Misconceptions in Procurement

Picture a procurement meeting. A vendor is presenting their AI system. It screens applications, prioritises cases, and flags risks. The slides are polished. The demo is smooth.


Someone asks: "Is this system responsible AI-compliant?"


The vendor says yes. Everyone nods. The meeting moves on.


But here's the problem. Nobody in that room agrees on what the question actually meant. And nobody agrees on what "yes" actually proved.


This is where most AI procurement goes wrong. Not through bad intentions. Through shared misconceptions that feel like due diligence but aren't.



Misconception 1: "They said it's ethical, so it must be"


Vendors will tell you their system is fair, transparent, and accountable. They'll show you a principles document. Maybe a certification badge.


But principles aren't proof. A statement that says "we are committed to fairness" tells you nothing about how fairness is measured, what thresholds are used, or what happens when the system drifts outside them.


Imagine buying a car and asking, "Is it safe?" The salesperson says, "Absolutely." You never ask about crash test ratings, airbag specifications, or brake response times.


You just take their word.

That's what accepting "we're ethical" without evidence looks like.



Misconception 2: "We tested it before deployment, so we're covered"


Pre-deployment testing is necessary. It's also wildly insufficient.


AI systems change. The data they process changes. The population they serve changes. A system that performed fairly in testing can drift into unfairness within months, sometimes weeks.


Think of it like a medical check-up. Passing your physical in January doesn't mean you're healthy in December. Systems need continuous monitoring, not one-time certification.


Yet most procurement specifications ask for evidence of testing. Almost none ask for evidence of ongoing monitoring.
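What would "evidence of ongoing monitoring" even look like? Here's a minimal sketch in Python. Everything in it - the function names, the toy decision data, and the 0.8 threshold (loosely styled on the "four-fifths rule" used in employment testing) - is an illustrative assumption, not a description of any real system.

```python
# Illustrative sketch: the same fairness check, run at test time
# and again on a recent production window. All names, data, and
# the 0.8 threshold are assumptions for the example.

def approval_rate(decisions):
    """Share of cases approved in a list of True/False decisions."""
    return sum(decisions) / len(decisions)

def fairness_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher one's.
    1.0 means identical rates; lower values mean a bigger gap."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# At test time the two groups looked identical...
test_ratio = fairness_ratio(
    group_a=[True, True, False, True],    # 75% approved
    group_b=[True, True, True, False],    # 75% approved
)

# ...months later, the production data tells a different story.
production_ratio = fairness_ratio(
    group_a=[True, False, False, False],  # 25% approved
    group_b=[True, True, True, False],    # 75% approved
)

THRESHOLD = 0.8  # illustrative threshold, set by the buyer, not the vendor
print(f"test: {test_ratio:.2f}, production: {production_ratio:.2f}")
print("drift alert" if production_ratio < THRESHOLD else "within threshold")
```

The point isn't the arithmetic. It's that the test-time number and the production number come from the same check, so a pre-deployment pass certificate tells you nothing about the second one.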



Misconception 3: "The vendor handles governance"


This is the most dangerous misconception.


When you procure an AI system, you're procuring a decision-making capability. Those decisions affect your citizens, your service users, and your organisation's reputation. The vendor built the tool. You own the outcomes.


If the system wrongly denies someone a benefit, the vendor doesn't face judicial review. You do. If the system discriminates, the vendor doesn't respond to the Equality and Human Rights Commission. You do.


Governance isn't something you outsource. It's something you maintain. The vendor can provide tools and documentation. But the accountability sits with you.



Misconception 4: "Compliance means responsible"


Compliance is a floor, not a ceiling.


A system can be fully compliant with current regulations and still cause harm. Regulations lag behind technology. They set minimum standards, not best practices.


More importantly, compliance is about legal risk. Responsibility is about actual impact. A system that's technically compliant but consistently produces worse outcomes for certain groups isn't responsible. It's just defensible in court.


That's not the same thing.



What Procurement Should Actually Ask


The shift isn't complicated. It's just different from how most specifications are written today.


Instead of "Is this system fair?", ask "How is fairness measured, and what are the thresholds?"


Instead of "Was this tested?", ask "How is this monitored in production, and who sees the results?"


Instead of "Do you handle governance?", ask "What governance capabilities do we need to operate this responsibly, and what does that require from us?"


These questions change the conversation. They surface what's actually needed to run the system well, not just to buy it.
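The answers to those questions can even be written down as a machine-checkable contract rather than a promise. The sketch below shows one hypothetical shape for that: metrics, thresholds, review cadence, and a named owner on the buyer's side. Every metric name, number, and role here is an assumption made up for the example.

```python
# Illustrative sketch: a monitoring "contract" a specification could
# demand, instead of "the vendor handles governance". All metric
# names, thresholds, and owners are assumptions for the example.

MONITORING_CONTRACT = {
    "fairness_ratio": {"threshold": 0.80, "direction": "min",
                       "review_cadence": "weekly",
                       "owner": "Service Manager"},
    "override_rate":  {"threshold": 0.15, "direction": "max",
                       "review_cadence": "monthly",
                       "owner": "Head of Operations"},
}

def breaches(contract, observed):
    """Return (metric, value, owner) for every breached threshold."""
    out = []
    for metric, spec in contract.items():
        value = observed[metric]
        too_low = spec["direction"] == "min" and value < spec["threshold"]
        too_high = spec["direction"] == "max" and value > spec["threshold"]
        if too_low or too_high:
            out.append((metric, value, spec["owner"]))
    return out

# Example production readings: fairness has slipped below threshold.
for metric, value, owner in breaches(
        MONITORING_CONTRACT,
        {"fairness_ratio": 0.72, "override_rate": 0.09}):
    print(f"{metric} at {value} - escalate to {owner}")
```

Notice what the contract forces into the open: not just a threshold, but a human on the buying side who sees the breach and is expected to act. That is the governance capability the third question asks about.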



The Gap That Matters


Most procurement processes focus on acquisition. Features, price, and delivery timeline.


Very few focus on operation. What happens after go-live? What happens when something goes wrong? Who notices? Who decides? Who acts?


That gap between buying and running is where responsible AI either works or fails.



Over the next 12 weeks, I'm explaining responsible AI (RAI) monitoring in plain English - what it is, why it matters, and how it works in practice. All free. Follow our page.





 
 
 
