AI Live Testing: Can the FCA support safe and responsible AI deployment?

We are caught in a bind about AI. Our pension pots depend on the Magnificent 7’s AI enterprise not being a bubble; if that bubble bursts, it takes down not just California’s technology sector but our retirement savings.

But it’s more fundamental than that! This and the infographic above are from a “big read” in the FT today.

Is what we are listening to played to us by fellow humans, or by our replacements? Can we trust the authenticity of Jacub?

Now let’s point that scepticism at financial services. It troubles me that everything from pricing pensions to answering pensioners’ questions can be done by artificial intelligence; who is testing that intelligence? Is it grounded in authentic sources such as the CMI’s mortality tables, or SMPI or ERI projections? Or is AI kicking the member’s questions down the road because it doesn’t have a pleasant answer today?

I have been looking for an answer from the regulators, and I was pleased to come across this in an email from the FCA.

AI Live Testing: How the FCA can support safe and responsible AI deployment

Ed Towers – Head of department, advanced analytics and data science unit

Exactly how deep AI’s impact on our economy will go is still up for grabs. Building trust, including through having the appropriate governance and accountability processes in place, is integral to accelerating its adoption in UK financial markets.

AI Live Testing now open for applications

At the FCA, we’re providing a structured but flexible space where firms can test AI-driven services in real-world conditions, all with our regulatory support and oversight and help from our technical partner, Advai. Collaboration and communication are at the heart of what we are doing.

The first cohort joined AI Live Testing in October last year. We opened a second application window on 19 January 2026 and are now inviting applications.

Moving on from ‘POC paralysis’

AI Live Testing is for firms with mature proofs of concept (POCs) that are ready to test in controlled market environments, with a view to imminent market deployment.

Through live testing we want to help UK innovators move safely beyond ‘POC paralysis’, or what is often described as ‘perpetual pilots’.

What we’re actually testing

We’re often asked if we’re testing foundation models as part of AI Live Testing. We distinguish between the AI system and the AI model, because we think the risks and benefits of an AI model can only ever be understood in the context of its specific use case, ie at the enterprise level.

We broadly define the AI system as: the AI model itself; information on the deployment context and its core risks; governance and human-in-the-loop considerations; evaluation techniques; and input and output controls. This means AI Live Testing takes a holistic approach to AI design and deployment, at the level of the deploying enterprise, rather than a narrow focus on the model itself.
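The FCA’s system-level definition can be illustrated as a simple data structure. This is a hypothetical sketch of my own, not an FCA artefact; all names and field values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Illustrative sketch of a system-level view of AI deployment:
    the model is only one component of the thing under test."""
    model_name: str                   # the underlying AI model
    deployment_context: str           # where and how it is used
    core_risks: list[str]             # risks in that deployment context
    governance: list[str]             # oversight and accountability measures
    human_in_the_loop: bool           # is there human review of outputs?
    evaluation_techniques: list[str]  # how performance is measured
    input_controls: list[str]         # guards on what goes into the model
    output_controls: list[str]        # guards on what comes out

# Example: a hypothetical pensions chatbot assessed as a system, not a model
system = AISystem(
    model_name="some-foundation-model",
    deployment_context="answering scheme members' pension queries",
    core_risks=["unsuitable guidance", "hallucinated figures"],
    governance=["board-level accountability", "audit trail"],
    human_in_the_loop=True,
    evaluation_techniques=["accuracy benchmarks", "red-teaming"],
    input_controls=["query filtering"],
    output_controls=["advice-boundary checks"],
)
```

The point of the sketch is that swapping the model while leaving the other fields unexamined changes almost nothing about whether the deployment is safe.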

How AI Live Testing works in practice

Participating firms get technical and regulatory support on AI, working with subject matter experts in financial services regulation and in AI systems testing and validation. Participation includes three sequential phases:

  • Discovery
  • Framework validation
  • AI system testing (including shared evaluation)

We focus on both quantitative and qualitative factors to get a truly holistic understanding of the AI system.

What’s in it for us

It’s clearly good business practice to test an AI system against its desired outcome. But, for us as a regulator, it also increases our understanding of the debate around safe and responsible AI. One of our key questions as part of AI Live Testing is how to translate principles into safe and responsible AI outcomes for financial consumers and markets. We can only do this by working with firms to establish a consensus on the way forward.

AI Live Testing also allows firms to share with us any challenges they are having interpreting and aligning with the regulatory landscape. This gives us material intelligence into what’s happening on the ground, including key risks. This helps us productively adapt and evolve in response to real industry behaviour, which is vital for any major technology shift.

How to apply

Applications for the second AI Live Testing cohort are open until 2 March 2026.

See our Terms of Reference (PDF) for more details.

We’ll notify successful firms by mid-March 2026, with testing starting from April 2026.

If you have any questions about AI Live Testing, please contact us at: AILiveTesting@fca.org.uk.


I have to be impressed with the FCA, and as we are already in the Pensions Innovation Unit, I wonder whether the two can together help deliver the radical developments in DC and CDC that we are now going through.

About Henry Tapper

Founder of the Pension PlayPen, partner of Stella, father of Olly. I am the Pension Plowman.
