In brief
- The UK’s Treasury Committee warned that regulators are leaning too heavily on existing rules as AI use accelerates across financial services.
- It urged clearer guidance on consumer protection and executive accountability by the end of 2026.
- Observers say regulatory ambiguity risks holding back responsible AI deployment as systems grow harder to oversee.
A UK parliamentary committee has warned that the rapid adoption of artificial intelligence across financial services is outpacing regulators’ ability to manage risks to consumers and the financial system, raising concerns about accountability, oversight, and reliance on major technology providers.
In findings ordered to be printed by the House of Commons earlier this month, the Treasury Committee said UK regulators, including the Financial Conduct Authority, the Bank of England, and HM Treasury, are leaning too heavily on existing rules as AI use spreads across banks, insurers, and payment firms.
“By taking a wait-and-see approach to AI in financial services, the three authorities are exposing consumers and the financial system to potentially serious harm,” the committee wrote.
AI is already embedded in core financial functions, the committee said, while oversight has not kept pace with the scale or opacity of those systems.
The findings come as the UK government pushes to expand AI adoption across the economy, with Prime Minister Keir Starmer pledging roughly a year ago to “turbocharge” Britain’s future through the technology.
While noting that “AI and wider technological developments could bring considerable benefits to consumers,” the committee said regulators have failed to provide firms with clear expectations for how existing rules apply in practice.
The committee urged the Financial Conduct Authority to publish comprehensive guidance by the end of 2026 on how consumer protection rules apply to AI use and how responsibility should be assigned to senior executives under existing accountability rules when AI systems cause harm.
Formal minutes are expected to be released later this week.
“To its credit, the UK got out ahead on fintech—the FCA’s sandbox in 2015 was the first of its kind, and 57 countries have copied it since. London remains a powerhouse in fintech despite Brexit,” Dermot McGrath, co-founder at Shanghai-based strategy and growth studio ZenGen Labs, told Decrypt.
Yet while that approach “worked because regulators could see what firms were doing and step in when needed,” artificial intelligence “breaks that model completely,” McGrath said.
The technology is already widely used across UK finance. Still, many firms lack a clear understanding of the very systems they rely on, McGrath explained. This leaves regulators and companies to infer how long-standing fairness rules apply to opaque, model-driven decisions.
The larger concern, McGrath argues, is not that firms will move too fast but that unclear rules may hold back responsible deployment, to the point where “regulatory ambiguity stifles the firms doing it carefully.”
AI accountability becomes more complex when models are built by tech firms, adapted by third parties, and used by banks, leaving managers responsible for decisions they may struggle to explain, McGrath explained.