Financial services firms want more guidance relating to artificial intelligence (AI) and not “necessarily more rules”.
That was the view of the Financial Conduct Authority's director of intelligence, digital and innovation, Ian Phoenix, speaking at the TISA AI conference today (3 February).
As AI becomes increasingly embedded in financial services, Phoenix said the City watchdog wants to ensure its adoption is safe.
“Innovation carries the greatest value when it is responsible,” he added.
However, the technology is not a “means to an end”, Phoenix warned.
In 2024, a joint survey from the FCA and the Bank of England (BoE) found that 74% of financial firms were already using AI, with a third using it for fraud detection.
Phoenix said the FCA is supporting firms at every stage of their AI journey.
In December 2025, the FCA announced it was working with major firms to test AI in a safe environment to better understand the technology's potential benefits and risks.
The AI Live Testing initiative was the first of its kind in the financial sector, helping firms that are ready to use AI in UK financial markets, according to the FCA.
Participating firms receive tailored support from the FCA’s regulatory team and its technical partner Advai to develop, assess and deploy safe and responsible AI.
AI Live Testing complements the "supercharged" sandbox the FCA launched in collaboration with Nvidia in June 2025, which helps firms experiment safely with AI to support innovation.
Phoenix also said that supporting the industry is only half the story: each firm must also "sort its data out", as data is a big part of AI.
Still, he said, AI is a "tool" that complements human judgement rather than replacing it.
Used correctly, AI can strengthen the UK's role as a leader in financial services, he added.
Phoenix concluded his talk by saying: “If your organisation is using AI we encourage you to talk to us, as we will help you navigate this technology.
“The FCA is here to support responsible innovation.”