

While Everyone Panics About the EU AI Act, Smart Companies Are Getting Ahead
Remember when GDPR hit and everyone panicked? Well, here we go again. The EU AI Act is live, and honestly? Most companies I talk to are flying blind. Here's the thing: while your competitors are scrambling to figure out what "high-risk AI systems" even means, there's a massive opportunity sitting right in front of you. I've been working with companies across Europe and the UK, and the ones getting this right aren't just checking compliance boxes. They're actually turning AI…
shashikantsingh090
Dec 29, 2025 · 2 min read


Guarding the Guards: What Welfare AI Reveals About the UK’s Regulatory Blind Spot
Did you know that under Chapter II, Art. 5 of the EU AI Act, which most AI service providers must comply with, providers are prohibited from "social scoring, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people"? A Computer Weekly article, "DWP 'fairness analysis' revealed bias in AI fraud detection system", states: "the assessment showed there is a 'statistically significant refer…
shashikantsingh090
Dec 26, 2025 · 2 min read


Beyond Big Tech: How Switzerland’s Apertus Is Redefining Responsible AI
Switzerland has just released Apertus, an open-source AI for everyone. It seems like the first truly Responsible AI for public use 🌍 Meet Apertus: the AI representative from the Public AI Initiative, built on 15 trillion tokens of multilingual training data through the Public AI Inference Utility. What makes it different from ChatGPT and Claude? ✅ Fully open-source (Apache 2.0 licence): it allows you to inspect the code and training data ✅ Swiss-engineered with cultural sensit…
shashikantsingh090
Dec 25, 2025 · 1 min read


Responsible AI: Less Aspiration, More Operation
It’s easy to say “we want ethical AI.” It’s harder to operationalise it. Looking at assurance checklists, a few themes always stand out: • User access management: Who can touch training data, and how often are permissions reviewed? • Vulnerability management cadence: Are model security checks monthly, quarterly, or ad hoc? • Explainability logging: Is every model decision traceable back to inputs and assumptions? • Sustainability: Do we measure compute costs and energy us…
shashikantsingh090
Dec 24, 2025 · 1 min read


When AI Goes Wrong: Why Good Intentions Aren’t Enough
In 2019, a major tech company quietly scrapped its AI recruiting tool after discovering it was systematically downgrading CVs from women. The algorithm had taught itself that male candidates were preferable by learning from a decade of historical hiring patterns. The company caught it, but only after it had been in use. The question that haunted the post-mortem wasn’t just “how did this happen?” It was “what else don’t we know about how this system makes decisions?” That’s th…
shashikantsingh090
Dec 19, 2025 · 4 min read


The Govern–Act–Monitor Loop: A Practical Framework for AI Oversight
I promised an update — here’s a first look at something I’ve been working on. I’ve designed and developed a practical, no-nonsense approach that gives people visibility and control over how AI operates — in real-world systems, not just theory. 🔁 At its core is a Govern–Act–Monitor loop: 🔎 Visibility — Map risks clearly and identify exactly where human intervention is needed 🛠️ Action — Empower teams to scope and manage mitigations, not just tick boxes 📈 Governance — Monit…
shashikantsingh090
Dec 18, 2025 · 1 min read
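The Govern–Act–Monitor loop above can be sketched as a minimal program. This is an illustrative interpretation only — the stage functions, `Risk` fields, and example risks are invented for this sketch and are not taken from the framework the post describes:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    needs_human: bool      # flags where human intervention is required
    mitigated: bool = False

def govern(system_map):
    """Visibility: map risks and mark human-intervention points."""
    return [Risk(name, needs_human) for name, needs_human in system_map]

def act(risks):
    """Action: scope and apply mitigations (stubbed for illustration)."""
    for r in risks:
        r.mitigated = True
    return risks

def monitor(risks):
    """Monitoring: report anything unmitigated or still needing a human."""
    return [r.name for r in risks if not r.mitigated or r.needs_human]

risks = act(govern([("biased training data", True), ("model drift", False)]))
print(monitor(risks))  # prints ['biased training data']
```

The point of the loop shape is that `monitor` feeds back into `govern`: whatever monitoring surfaces becomes the next iteration's risk map, rather than a one-off compliance tick.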


Responsible AI in Practice: Building Fairness Monitoring into Azure MLOps
Responsible AI isn’t just a buzzword: it’s about building systems that work fairly for everyone. One of the biggest challenges I see with Azure OpenAI deployments? Teams focus on accuracy metrics but forget to measure whether their models treat different groups fairly. That’s where parity metrics become essential. Responsible AI needs measurable fairness: • You can’t manage what you don’t measure • Parity metrics reveal hidden biases that accuracy scores miss • They help you…
shashikantsingh090
Dec 17, 2025 · 2 min read
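To make the parity idea concrete, here is a minimal sketch of one common parity metric, the demographic parity difference. The function names, group labels, and data are illustrative assumptions, not the Azure tooling the post refers to:

```python
# Minimal sketch: demographic parity difference between groups.
# Predictions and group labels are illustrative; a real pipeline would
# pull these from model outputs and a protected-attribute column.

def selection_rate(preds):
    """Fraction of positive predictions within one group."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds_by_group):
    """Max minus min selection rate across groups; 0.0 means parity."""
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

preds = {
    "group_a": [1, 1, 0, 1, 0],  # selection rate 0.6
    "group_b": [1, 0, 0, 0, 1],  # selection rate 0.4
}
gap = demographic_parity_difference(preds)
print(round(gap, 2))  # prints 0.2
```

Note what this catches that accuracy cannot: both groups could be scored with identical accuracy while one is selected far more often, and only a parity-style metric surfaces that gap.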


🚨🚨🚨 Why aren’t all AI models being monitored for RAI compliance? 🚨🚨🚨
The alarm bells should be ringing. With EU AI Act fines reaching up to €35 million or 7% of global turnover, and major compliance deadlines already in effect since February 2025, we’re seeing a concerning gap in AI model monitoring. The UK’s AI Playbook, published just this February, emphasises the need for “meaningful human control” and understanding AI limitations. Meanwhile, the EU AI Act’s transparency and documentation requirements for high-risk AI systems became binding…
shashikantsingh090
Dec 16, 2025 · 2 min read


How Value Sensitive Design Reveals AI's Hidden Social Costs
Why We Need a Better Diagnostic Tool: We have plenty of AI ethics frameworks telling us what responsible AI should look like. What we don't have is a good way to examine what AI is actually doing once it's out in the world. That's where Value Sensitive Design becomes useful. VSD, developed by Batya Friedman, gives us a structured way to look at how technology affects human values. It works through three investigations: conceptual (which values matter), empirical (what's actual…
shashikantsingh090
Dec 13, 2025 · 5 min read


AWS re:Invent 2025 — Part 1 Reflections: Standing in the Future I Once Researched and Imagined
This year at AWS re:Invent 2025 felt like a convergence between two worlds I’ve inhabited for years. On one side, I arrived as a Principal Technologist enabling AI across the UK Public Sector, focused on the practical “Hows” — the architectures, controls, evaluation frameworks, and governance needed to scale Agentic AI for public services safely. On the other side, I walked in with the mindset formed during my PhD research concluded in 2016, where I explored: context-aw…
shashikantsingh090
Dec 12, 2025 · 5 min read


The Complex Relationship Between AI Investments and Human Needs in a Rapidly Evolving World
The technology has already woven itself into our lives. The only question is which companies survive the journey. Is AI in a bubble? Probably. Will it crash? Quite possibly. Does any of that matter for AI's long-term trajectory? Not really. Even Sam Altman, the CEO of OpenAI, has acknowledged that investors are "overexcited" about AI. The Bank of England has warned about the risks of a global market correction. Apollo Global Management's chief economist has stated that the cu…
rajeev385
Dec 9, 2025 · 4 min read
