
FSB’s AI Oversight Draws Industry Frustration as Experts Warn of “Sleepwalking into a Crisis”

The Financial Stability Board (FSB) this week released its long-awaited report on how regulators should monitor artificial intelligence in global finance — and critics say it already feels outdated. While the document maps AI adoption and proposes new “monitoring indicators” for vulnerabilities like third-party dependencies and cyber risks, industry leaders argue that its approach leans too heavily on surveys, paperwork, and voluntary data collection.


Richard Bird, Chief Security Officer at Singulr AI, called the guidance a worrying case of déjà vu.


“It is troubling that almost exactly three years after the big Sam Altman announcement that AI is now officially changing the world we live in, the Financial Stability Board (FSB) is advocating for check-the-box oversight and 'keep your eyes open' monitoring of AI usage and exposure,” Bird said. “AI is introducing new attack methods and surfaces daily, while the FSB continues to push the decades-old mantra of declarative self-reporting compliance.”

A Framework Stuck in the Past


The FSB’s report, Monitoring Adoption of Artificial Intelligence and Related Vulnerabilities in the Financial Sector, outlines how authorities can track AI’s rapid integration into banking, insurance, and payment systems. It recommends voluntary firm surveys, “vulnerability mapping,” and cooperation across national regulators to standardize AI taxonomies.


But for Bird — and many practitioners on the front lines — the report’s reliance on self-disclosure ignores how AI actually evolves. “If this approach didn’t work well over the past 30 years, why would we expect it to work with AI?” he asked. “And suppose AI is truly the galactic game-changer that the large AI companies say it is: why are regulatory and compliance bodies suggesting archaic methods to control, govern, and secure it?”


The Governance Blind Spot


The FSB frames AI oversight largely as a data-aggregation challenge — collecting indicators, publishing surveys, and comparing adoption rates across jurisdictions. Bird contends that this misses the operational reality: “Governance, security, and controls are operational realities that can’t be effectively understood solely through statistical proxies.”


The group’s report highlights vulnerabilities such as model risk, third-party dependency, and generative AI (GenAI) concentration — but stops short of requiring real-time detection or mandatory inventories of deployed models. For Bird, that’s a fundamental gap.


“If we are going to make AI safe for finance, we can’t rely on policy PDFs and quarterly surveys,” he said. “We need direct demands to detect, inventory, and monitor all AI that our financial institutions are exposed to, internally and externally.”

A System Built on Trust, Not Verification


Experts say the FSB’s slow-moving frameworks clash with the pace of AI innovation. The report acknowledges that national authorities face “limited data availability” and “lack of transparency” in AI systems, yet still recommends largely voluntary reporting schemes.


That approach might have sufficed for traditional financial technologies. But as models like GPT-5 and agentic AI systems are integrated into credit scoring, algorithmic trading, and fraud detection, oversight through spreadsheets seems dangerously quaint.


What Comes Next


The FSB plans further collaboration with G20 regulators to align on definitions and reporting standards. In practice, that means years of additional consultations — a timeline Bird says finance can’t afford.


Regulators, meanwhile, are stuck between innovation and inertia: push too hard, and they risk stifling growth; act too slowly, and they may fail to spot systemic threats until after the next AI-driven market shock.


For now, the FSB’s guidance offers a map of what to watch — but not the tools to see it happening.
