
Progress on regulating AI in financial services is slow, perhaps for good reason

UK financial watchdog the FCA is taking its time drawing up new AI regulations. It may be right to tread carefully.

By Stephanie Stacey

AI is nothing new in the financial services sector. For several years, the technology has been used to support everything from automated stock trading to credit-scoring algorithms that process loan applications. But AI’s rapid technological and commercial growth, which has enabled companies to process vast quantities of raw data, is a troubling prospect for regulators – especially those, like the Financial Conduct Authority (FCA), charged with ensuring honesty and fairness from a technology often branded a ‘black box’ because of the opaque way it operates.

[Image: an AI robot behind bars. Regulators have been tasked with ensuring consumer protection without stunting business innovation. (Image by Shutterstock)]

“I think one of the biggest challenges with regulation is the pace at which technology is evolving,” says Klaudija Brami, who works on legal technology at law firm Macfarlanes. “You’ve got this cat-and-mouse situation between technology development and regulation.”

While the EU has attempted to craft an all-encompassing, cross-sectoral set of regulations in the form of the AI Act, the UK is taking a more hands-off, principles-based approach. Individual regulators, like the FCA, are essentially being asked to cultivate responses to the technology on a sector-by-sector basis – a strategy that’s intended to offer a dynamic, pro-innovation environment to help fulfil Rishi Sunak’s pitch to make the UK a global hub of AI regulation.

But there’s still a long way to go. The FCA and the Bank of England issued their latest discussion paper on AI last October, a month before ChatGPT saw the light of day and threw AI into the global limelight. Since then, the promises and risks of AI have only grown more prominent, as the FCA’s chief executive, Nikhil Rathi, recently acknowledged, but haven’t yet been met by formal regulatory responses. 

The FCA might have good reason to take its time. AI is a rapidly changing and powerful technology, and some fear that setting fixed rules could clip its wings and impede British innovation. But AI can also spark its own problems and accentuate existing inequalities, meaning it remains a sticking point for regulators. What’s coming down the line? And can existing rules and regulations stand up to the rapid rise of AI? 

What’s the FCA doing about AI? 

In a speech at the beginning of July, Rathi said: “The use of AI can both benefit markets and can also cause imbalances and risks that affect the integrity, price discovery and transparency and fairness of markets if unleashed unfettered.” He promised a pro-business approach from the regulator, saying it would open up its AI “sandbox”, which enables real-world testing of products that aren’t yet compliant with existing regulations, to businesses eager to test out the latest innovations. “As the PM has set out,” said Rathi, “adoption of AI could be key to the UK’s future competitiveness – nowhere more so than in financial services.”

There’s a lot of talk about AI’s promise, but what kinds of risks will the FCA want to avoid? While algorithm-powered financial trading is well established, the big difference, amid the rapid rise of large-scale AI, is the increasingly widespread use of non-traditional data, such as social media behaviour or shopping habits, in consumer-facing financial services like loan-application assessments.
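To make that distinction concrete, here is a minimal sketch in Python of a loan-assessment model that blends traditional credit data with non-traditional signals. Everything in it – the feature names, the data, the threshold – is hypothetical and illustrative, not drawn from any real lender’s system.

```python
# Hypothetical sketch of a loan-assessment model mixing traditional
# credit data with the non-traditional signals regulators are eyeing.
# All features, data and labels are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=42)
n = 1_000

# Traditional underwriting features.
income = rng.normal(35_000, 10_000, n)        # annual income
history_years = rng.integers(0, 20, n)        # length of credit history

# Non-traditional features of the kind the FCA is watching.
social_activity = rng.uniform(0, 1, n)        # e.g. social media behaviour
avg_basket = rng.normal(60, 25, n)            # e.g. shopping habits

X = np.column_stack([income, history_years, social_activity, avg_basket])
# Synthetic repayment labels, driven only by the traditional features.
y = (income + 2_000 * history_years + rng.normal(0, 8_000, n) > 45_000).astype(int)

# Scale the features so the logistic regression converges cleanly.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
print("Approval probability for the first applicant:",
      model.predict_proba(X[:1])[0, 1])
```

The toy labels are deliberate: repayment here depends only on the traditional features, yet the fitted model will still assign weight to the non-traditional ones wherever they happen to correlate in the sample – exactly the kind of spurious influence regulators want firms to detect.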

The FCA and other regulators are concerned, in particular, about the prospect of consumer detriment arising from AI models trained on inherently biased, inadequately processed or insufficiently diverse datasets. “If there are biases or gaps in the data, AI systems are going to perpetuate or entrench inequalities that we already see within society,” says Brami.
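One simple check a firm might run, before anything more sophisticated, is to compare outcomes across groups. The snippet below is a minimal sketch of such a disparity test, assuming a hypothetical decisions table with an ‘approved’ flag and a protected ‘group’ attribute; real fairness monitoring would be considerably more involved.

```python
# Minimal sketch of an outcome-disparity check. The column names and
# figures are hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Approval rate per group; a persistent gap is a red flag that the
# model, or the data it was trained on, is entrenching an inequality.
rates = decisions.groupby("group")["approved"].mean()
print(rates)
print("Approval-rate gap:", abs(rates["A"] - rates["B"]))
```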


There’s also the ever-lurking problem of explainability. Even the creators of AI models sometimes can’t say why their systems reach the decisions they do, but businesses will likely need to be able to explain the reasoning behind their tools and algorithms if they want to avoid a regulatory crackdown.
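For simple models, that reasoning can be surfaced directly. The sketch below – again hypothetical, with made-up feature names and data – shows one way a firm might report per-feature contributions for a single decision from a linear model; opaque deep models need heavier explainability tooling, which is precisely where the ‘black box’ complaint bites.

```python
# Hypothetical sketch: for a logistic regression, a feature's
# contribution to one decision is simply coefficient * value, which
# can be reported to a customer or a regulator on request.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_k", "history_years", "social_score"]
X = np.array([[35.0,  5, 0.2], [80.0, 15, 0.9], [20.0,  1, 0.5],
              [55.0,  8, 0.4], [30.0,  2, 0.7], [90.0, 20, 0.1]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression(max_iter=1_000).fit(X, y)

# Rank the features driving the first applicant's score.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda pair: -abs(pair[1])):
    print(f"{name:>14}: {value:+.3f}")
```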

[Image: the FCA. The FCA has a special AI Strategy Team, which is charged with exploring the risks and opportunities of the technology. (Image by IB Photography/Shutterstock)]

Slow and steady wins the race

Tim Fosh, a financial regulatory lawyer at Slaughter and May, says he certainly doesn’t envy the regulators. “They want to promote competition, which is one of [the FCA’s] new objectives, and they don’t necessarily want to stifle the promise in the area, because it could create considerable opportunities,” says Fosh. Nevertheless, there is heavy pressure to protect consumers and take visible action amid a moment of fraught public discourse around AI, a debate that has intensified since the launch of OpenAI’s GPT-4 large language model and competitors like Google Bard.

“You don’t want to throw the baby out with the bath water just for the sake of making regulation,” says Fosh. That’s why, he speculates, regulators like the FCA have thus far been cautious about putting forward any formal proposals, even though they’ve been closely examining AI for several years. “Because of the dynamic nature of the industry, they don’t want to be regulating strictly at a point in time when everything is moving so fast,” Fosh says. Instead, he predicts a future marked by interpretive, principles-based guidance — a proposition that’s largely consistent with the UK’s broader sector-led and tech-neutral approach to AI. 

It’s not just the regulators facing an uphill battle. It’s also pretty tricky for businesses, developers and lawyers trying to keep up. “One of the mantras of start-ups is ‘move fast and break things’, but in a regulatory context that’s clearly very dangerous,” says Fosh. “New challenges and constraints are essentially being discovered by firms on a daily basis as they try to put these things in place. They try to comply with their obligations and find that the regulations, as they’re currently drafted, don’t neatly match up with what they’re trying to do.”

There’s a chance that regulators could ultimately require a specific manager within each organisation to take responsibility and accountability for AI, in an expansion of the existing UK Senior Managers and Certification Regime, which makes individuals accountable for a company’s conduct and competence. “That’s the key touch point: that the FCA will have a human to hold accountable if something goes wrong,” says Michael Sholem, a partner at Macfarlanes. Nevertheless, this kind of proposition might require some serious governmental support; otherwise, it’s probably not a professional responsibility that many people would want. “How does that person ever get comfortable?” asks Fosh.

AI might seem new and shiny — as well as perplexing — but the FCA still has a lot of history to fall back on. Indeed, many of the ground rules that will govern AI might already be in place. “The FCA has been very clear that although they’re consulting on how to change the regulatory regime to deal with AI and ML, ultimately the FCA’s Principles for Businesses apply across these activities,” says Sholem. “There’s not, at this time, a need for a fundamental overhaul of everything to do with financial services regulation just to deal with AI and ML.”

