Grant Powell MBCS spoke to Nathalie Lowe and Dr Henrike Mueller from the Financial Conduct Authority about their work in AI and the challenges associated with regulating technologically evolving markets.
The Financial Conduct Authority (FCA) is the conduct regulator for over 50,000 financial services firms and financial markets in the UK. With dramatic growth in the use of AI globally bringing many benefits and new challenges, the FCA has a responsibility to ensure that financial markets remain honest, competitive and fair. Nathalie Lowe, Senior Manager and Accountable Executive for Transforming Data Collection, and Dr Henrike Mueller, AI Strategy Team Leader, elaborate.
Can you introduce yourselves and provide some background to your careers?
Nathalie: I have been at the FCA for just over 11 years. My background is primarily in project management, but I started hearing more and more about the importance of data, to the point that I decided that’s what I wanted to be involved in. I joined a team focusing on analytics, where all the data scientists were, and started project managing data science projects. From there I became a private secretary to the Chief Data, Information and Intelligence Officer, and I’m currently Senior Manager for Transforming Data Collections, which is a joint programme with the Bank of England. We use data to provide a better service for the 50,000+ firms that we regulate.
Henrike: I have a PhD in Financial Service Regulation and found myself in the AI space due to a fascination with the subject. In 2017 I managed an international affairs team and I got invited to a meeting by the Financial Stability Board (FSB), which is the global standard-setting body for central banks. They were running a two-day event which covered AI and how it would impact financial markets. It was so interesting that, as a result, I have been working in this space ever since. I currently head up the FCA’s AI Strategy team and, like Nat, I’ve been here for close to 11 years.
How is the prevalence of AI affecting your work and how is it being incorporated into services?
Nathalie: When ChatGPT came out the world changed. We had to pivot quickly, and the FCA has gone from having just one team focusing on AI to employing a much more widespread, horizontal focus. AI now impacts everything and is everywhere, and because of the types of firms we regulate we have to be one step ahead. Due to the impact that we have on society and the size of the market, we have to make sure AI is beneficial, but it also has to have the right guardrails in place to make sure things move in the right direction.
Henrike: I think there are two angles to this. First of all we are a regulator, so we need to set guides and rules for industry on complex models such as AI. Secondly, as an organisation we have to think about how we want to use the technology. And, of course, we need to align what we want to do internally with what we’re saying externally. It’s very exciting, and both Nat and I have been involved in the external engagement but also the development of internal frameworks for what kind of AI we want to use and for what purposes. We have to consider that whatever we want to do is not only compliant with our own handbook but is also ethical. We have conducted various pieces of research over the past year, and have learned a lot from the responses. Findings from our two-year joint project with the Bank of England are set out in some detail in our AI Public-Private Forum final report, and for me that was such an insight into the debate around AI use and probably one of the pieces of work I’m most proud of. It’s critical to be able to understand the challenges of using AI within financial services as well as establishing how we can support safe adoption of the technology, so it was invaluable to understand the views of a community of practitioners, separated from the hype.
What are the main benefits and challenges of AI use and how do these present in financial services?
Nathalie: There are so many tangible benefits such as the ability to reduce human error, 24/7 availability, and the removal of repetitive tasks to free up humans so that they can concentrate on those tasks that add greater value. AI’s ability to identify patterns is incredibly useful when you’re an organisation like ours. One example might be the analysis of data transfer or behaviour patterns to spot criminal activity. We have quite a leadership role in this and, together with the Digital Regulation Cooperation Forum (DRCF), we can use AI to ensure greater cooperation on regulatory matters. As a regulator we need to understand the benefits and dangers relating to AI, and how to promote healthy competition while also preventing harm.
Henrike: The technology has huge potential and one shouldn’t be too scared to use it. And there are so many models under the general term AI, all with different use cases; natural language processing, text analysis and the like. It’s absolutely massive. Many firms are using AI to improve operational efficiency on many levels, and this is absolutely great. In terms of challenges it’s really all about how to use AI whilst being respectful of the rules and regulations that are in place, whether that is data protection or our own financial sector rules, and really just working through the practicalities of that. And that’s going to take some time. Rather than challenges in terms of barriers it’s more about finding ways to unlock all of the benefits that AI offers, but doing so responsibly and safely.
Why is collaboration with stakeholders so important?
Nathalie: As we know, AI crosses boundaries. You can’t have a boundary around AI, so in order to be successful it’s important to be collaborative. I think that the issues that we face as companies, as a society and even globally can be helped through a joined-up approach to AI deployment. This will help us address business challenges as well as larger issues such as climate change. No single body can provide all of the answers or all of the strategy, so we have to work together. The FCA collaborates with organisations both inside and outside of financial services, as well as chairing the Global Financial Innovation Network, a collaboration between 80 regulators which is currently discussing AI, cryptocurrency, ESG and other key areas of interest.
Henrike: When considering the most important element in the financial space, people always expect you to say it’s resources; it’s people; it’s know-how. But no, it’s actually stakeholder engagement — it’s that collaboration piece. Technology is evolving constantly, so we need to work with all relevant parties to make sure that what we’re doing is the right thing. Ultimately, we need to ensure that we meet the needs of consumers. We work very closely with government, academia and both domestic and international regulators, so it’s a constant process of engagement.
Nathalie: Yesterday we concluded a three-month TechSprint with a focus on greenwashing. We worked with multiple regulators, academia and tech people, coming together to work out how we identify this, and how we can increasingly use AI to help us moving forward. The results will be shared later in the year. A previous TechSprint focused on money laundering and yielded some interesting results. Initiatives such as this help us explore how we can improve regulatory effectiveness, using AI as a fast and accurate way to monitor and flag potential areas of interest or concern.
Where are the opportunities for AI moving forward?
Nathalie: The ability for businesses to hyper-personalise services, rather than offering a one-size-fits-all approach, will be highly beneficial for consumers and dramatically improve their experience, not just with banks and financial institutions, but across all manner of services. Additionally, as part of our secondary international competitive objective (SITCO), we’re helping to position the UK as a global leader in AI, which will result in a positive cycle of development with more firms wanting to work with the UK and a resulting boost in investment, products, profit and productivity.
Henrike: Another opportunity is the use of technology to transform the way that financial advice is provided, making it much more personalised. There would need to be conversations about how AI can be compliant with the necessary regulatory frameworks, but I think that there is massive potential here. We also mentioned money laundering earlier, and this is an area of huge concern because it represents a significant cost to the economy and affects the reputation of businesses. We have started running a project using synthetic, AI-generated data. It’s a very active research area involving close collaboration with banks. AI can be used to identify and make predictions about fraudulent money flows based on analysis of data patterns. There is lots of exploration happening in this area, lots of debate, and I think it’s an extremely interesting area where we’ll see much positive impact in the years to come.
Finally, as women working in AI, what are the benefits of diverse teams?
Henrike: Differences of views, opinions and backgrounds are so important. Diversity of thinking is such an asset because it brings to a team a spirit of enquiry, challenge and collaboration. Having a supportive, inclusive culture which values differences of opinion is absolutely key because everybody has something really important to contribute and every perspective is valuable.
Nathalie: Most leaders really do want diverse teams. The businesses that perform best are those that understand their markets, and markets are diverse. Also, thinking about AI, it’s still relatively new, so it’s not one of those careers where there are people with 20 years of AI experience and the barrier to entry is very high. There’s such scope within technology and AI, and such an opportunity to shape job roles and make them your own within a supportive, diverse and encouraging environment.