The Financial Conduct Authority (FCA) published its AI Update last year as part of its outcomes-focused approach to AI in financial services. To deepen its understanding of how AI is being deployed in UK financial markets, the FCA recently launched the AI Lab. The AI Lab offers a platform for the FCA, firms, and stakeholders to collaborate on AI-related insights, discussions, and case studies, fostering safe and responsible AI adoption in UK financial markets.
As part of the Lab, the FCA sought stakeholders' views on current and future uses of artificial intelligence through an AI Input Zone questionnaire, to inform and support opportunities for innovation in the United Kingdom.
Napier AI responded, covering the risks, opportunities, benefits, and threats of using AI for financial crime compliance:
What AI use cases are you considering or exploring in your firm/organisation? What do transformative AI use cases look like in the next 5 to 10 years?
At Napier AI, we use a variety of AI models and tools, including regression models, classification, segmentation, large language models (LLMs), forecasting, dimensionality reduction, and reinforcement learning in our screening and monitoring solutions for financial crime compliance. We also employ synthetic data, generative adversarial networks, and cooperative agents for continuous testing of scenarios and rules with our models.
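By way of illustration, the sketch below shows a generic supervised classification model flagging transactions for review. The features, data, and alert threshold are hypothetical and are not drawn from Napier AI's production systems.

```python
# Generic illustration of a supervised classification model for transaction
# monitoring. Features, data, and the alert threshold are hypothetical and
# do not reflect Napier AI's production systems.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical transaction features: amount, hour of day, cross-border flag,
# number of transactions in the previous 24 hours.
X = np.column_stack([
    rng.lognormal(mean=6, sigma=1.2, size=n),   # amount
    rng.integers(0, 24, size=n),                # hour of day
    rng.integers(0, 2, size=n),                 # cross-border flag
    rng.poisson(3, size=n),                     # 24h transaction count
])
# Toy label used only to generate training data for this sketch.
y = ((X[:, 0] > 1_000) & (X[:, 2] == 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score unseen activity and raise alerts above a (hypothetical) risk threshold.
scores = model.predict_proba(X_test)[:, 1]
alerts = scores > 0.8
print(f"Alerted {alerts.sum()} of {len(alerts)} transactions for review")
```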
Financial institutions are already embracing AI for compliance, and in 5 to 10 years quantum machine learning techniques may become available and normalised in financial services, and, importantly, in the hands of sophisticated criminals. As automation of such tools increases, institutions will be able to do more with the same resources in real time. Surfacing high-dimensional patterns in this way will enable a holistic understanding of clients, businesses, and transactions in the context of their sector.
We are currently seeing the transformative nature of large language models (LLMs), especially for data gathering and summarisation. As the technology changes and improves over time, we need to make sure that human agency is retained in identifying new financial crime typologies that pre-trained models will not catch. Instead of reacting to and trailing the sophistication of criminals, regulations and tools should anticipate such changes and stay ahead of them.
Are there any barriers to adopting these use cases currently, or in the future?
A significant barrier is the high costs associated with implementing AI solutions, which require substantial financial investment in infrastructure, software, and talent. Smaller institutions often do not have the budget to afford these expenses. Integrating AI systems with existing legacy systems may be difficult and costly, requiring significant upgrades or replacements.
Another barrier is the lack of sector-specific expertise; developing and maintaining such AI systems requires specialised knowledge not only in data science, machine learning, and AI technologies, but also in financial crime compliance typologies, which smaller institutions may not possess. Finding data scientists with domain knowledge in financial crime compliance is crucial when designing and tuning systems, and when responding to regulator requests for audit and explanation. Type III statistical errors, where the system returns the right answer but for the wrong reason, are one of the major concerns of regulators in the application of AI to AML. This includes knowing the right questions to ask about the explainability and accuracy measures of AI models, as a sub-par implementation without these will introduce even more risk into their services. It is also important to assess an institution's AI-readiness before blindly adopting GenAI or other tools to match competitors, without thoroughly understanding the specific use cases and outcomes. Even when done right, there should be continuous retraining of AI systems and their users to achieve the desired outcomes.
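As a hedged sketch of the kind of check this implies, the example below evaluates not only a model's accuracy but which features drive it, using permutation importance to surface a model that may be right for the wrong reason. The data, feature names, and the "review_queue_flag" proxy are hypothetical.

```python
# Hedged sketch: checking not just whether a model is accurate, but why, to
# surface Type III errors (the right answer for the wrong reason). The data,
# feature names, and the "review_queue_flag" proxy are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4_000
amount = rng.lognormal(6, 1.2, n)
cross_border = rng.integers(0, 2, n)
y = ((amount > 1_000) & (cross_border == 1)).astype(int)

# Hypothetical leakage: an operational flag recorded after investigation, which
# correlates with the label but carries no information at decision time.
review_queue_flag = np.clip(y + rng.integers(0, 2, n) - rng.integers(0, 2, n), 0, 1)

X = np.column_stack([amount, cross_border, review_queue_flag])
feature_names = ["amount", "cross_border", "review_queue_flag"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))

# If importance concentrates on the proxy flag rather than the behavioural
# features the typology is built on, accuracy may be "right for the wrong
# reason" and the model needs investigation before deployment.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:>20}: {imp:.3f}")
```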
Furthermore, AI models necessitate large amounts of high-quality data, and small financial institutions may struggle to gather, clean, and maintain such data due to smaller customer bases and less comprehensive data collection systems. Not every firm is inherently well placed to collate customer and transaction data, let alone data from external sources, yet this is necessary to generate a holistic picture of customer behaviour in financial crime compliance. Data security risks also arise, as insufficient data security measures can lead to breaches, exposing sensitive financial data and damaging trust.
Regulatory and compliance issues also pose a challenge, as navigating the complex regulatory landscape related to AI can be daunting for small institutions with limited legal and compliance resources. Operational risks are a major concern, as ineffective or poorly implemented AI systems can lead to operational disruptions, errors in decision-making, and reduced service quality. Additionally, the inability to leverage AI can result in a competitive disadvantage, with larger institutions using AI to enhance customer experiences, reduce costs, and innovate products.
Differences in jurisdictional approaches to AI regulations across the world pose significant challenges for enterprises operating on a global scale. These discrepancies can lead to fragmented compliance requirements, forcing global companies to adhere to the strictest regulations across all markets, which consumes time and resources and could stifle innovation. The need to navigate varying legal frameworks may also hinder the uniform development and deployment of AI products, leading to increased costs and complexity.
Additionally, inconsistent risk levels and regulatory standards across regions can create competitive imbalances, where companies in more lightly regulated areas may have an advantage in innovation and speed to market, while those in stricter jurisdictions face greater operational burdens. This regulatory patchwork can slow down AI advancements and disrupt the global competitive landscape, making it harder for businesses to manage AI-related risks and achieve cohesive enterprise-wide strategies.
Is current financial services regulation sufficient to support firms to embrace the benefits of AI in a safe and responsible way, or does it need to evolve?
Partnerships and collaboration with fintech companies, academic institutions, or larger banks, along with guidance and support from regulators, can help share resources and expertise. In the high-stakes sector of financial crime compliance, this could mean access to an industry-standard, open-source, regulated data set that encapsulates known typologies but is stripped of personally identifiable information (PII), for training models. This would help prevent AI models trained on unrepresentative data sets from exacerbating biases, which manifest as false positives in compliance checks and fraud detection, or as misclassification of customers' creditworthiness, leading to unwarranted account freezes or loan denials.
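One way such bias can surface is a gap in false positive rates between customer segments, as in the sketch below; the segment labels, alerts, and investigator outcomes are entirely hypothetical placeholders.

```python
# Hedged sketch: comparing false positive rates across customer segments to
# spot bias from unrepresentative training data. Segment labels, alerts, and
# investigator outcomes below are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "segment":   ["domestic"] * 6 + ["cross_border"] * 6,
    "alerted":   [1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1],   # model raised an alert
    "confirmed": [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1],   # investigator confirmed
})

# False positive rate per segment: share of legitimate activity that was alerted.
legitimate = df[df["confirmed"] == 0]
fpr_by_segment = legitimate.groupby("segment")["alerted"].mean()
print(fpr_by_segment)

# A large gap between segments suggests the training data does not represent
# all customer groups equally and warrants review before deployment.
```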
The FCA Innovation Group’s Synthetic Data sub-group is an excellent example of such collaboration. Synthetic data sets can enhance explainability and protect against potential bias risk: the risky activity is recreated in isolation from personally identifiable information (PII) and protected characteristics, then distributed through your data to ensure your models are learning the correct patterns. This builds a fundamental understanding of the data you hold, of what is and is not possible, and of the considerations needed for effectiveness and accuracy.
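A minimal sketch of that approach might look as follows, using a hypothetical "structuring" typology and thresholds: a risky pattern is generated with no PII, blended into benign data, and then used to confirm the model has learnt the intended pattern.

```python
# Minimal sketch of the synthetic-data approach described above: a known risky
# pattern is generated with no PII, blended into benign data, and used to check
# that the model learns the intended pattern. All values are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Benign activity: transaction amount and number of transfers in 24 hours.
benign = np.column_stack([rng.lognormal(5.5, 1.0, 2_000), rng.poisson(2, 2_000)])

# Synthetic "structuring" typology: repeated transfers kept just under a
# hypothetical 10,000 reporting threshold, generated with no real customers.
risky = np.column_stack([rng.uniform(8_500, 9_900, 200), rng.poisson(8, 200)])

X = np.vstack([benign, risky])
y = np.concatenate([np.zeros(len(benign)), np.ones(len(risky))])

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Verify the model recovers the planted pattern: fresh synthetic structuring
# cases should score far higher than typical benign activity.
new_risky = np.column_stack([rng.uniform(8_500, 9_900, 50), rng.poisson(8, 50)])
print("mean score, synthetic typology:", model.predict_proba(new_risky)[:, 1].mean())
print("mean score, benign sample:     ", model.predict_proba(benign[:50])[:, 1].mean())
```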
Regulators could also aid in the creation of such synthetic data for known AML typologies through data obfuscation: taking real data and anonymising it. This has the advantage of a shape and variability aligned with the real world, but extreme care must be taken that individuals and entities cannot be identified by inference or by a process of elimination.
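A simplified sketch of such obfuscation is shown below; the column names, salt, and perturbations are hypothetical, and a formal re-identification risk assessment would still be needed in practice.

```python
# Hedged sketch of data obfuscation as described above: identifiers are replaced
# with salted hashes and amounts/dates are lightly perturbed, preserving the
# shape and variability of the real data. Column names and the salt are
# hypothetical; re-identification risk must still be assessed formally.
import hashlib
import numpy as np
import pandas as pd

SALT = "replace-with-a-secret-salt"   # hypothetical; keep out of source control

def pseudonymise(identifier: str) -> str:
    """One-way salted hash so the same party always maps to the same token."""
    return hashlib.sha256((SALT + identifier).encode()).hexdigest()[:16]

def obfuscate(df: pd.DataFrame, rng: np.random.Generator) -> pd.DataFrame:
    out = df.copy()
    out["customer_id"] = out["customer_id"].map(pseudonymise)
    out = out.drop(columns=["name"])                                  # drop direct PII
    out["amount"] = out["amount"] * rng.normal(1.0, 0.02, len(out))   # small jitter
    out["date"] = pd.to_datetime(out["date"]) + pd.to_timedelta(
        rng.integers(-3, 4, len(out)), unit="D")                      # shift dates slightly
    return out

rng = np.random.default_rng(1)
raw = pd.DataFrame({
    "customer_id": ["C001", "C002", "C001"],
    "name": ["Alice Example", "Bob Example", "Alice Example"],
    "amount": [9500.0, 120.0, 9800.0],
    "date": ["2024-03-01", "2024-03-02", "2024-03-03"],
})
print(obfuscate(raw, rng))
```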
What specific changes or additions to the current regulatory regime, or areas of further clarification/guidance, do you think are needed?
Global regulators have increasingly introduced and strengthened AI regulations over the past year. The mandatory guardrails for high-risk AI proposed by the Australian Government are an example of prescriptive AI guidance. The recently published thematic review of banks' AI (including generative AI) model risk management practices by the Monetary Authority of Singapore (MAS) is also a welcome move. In the US, the Department of the Treasury published a report on the uses, opportunities, and risks of artificial intelligence in the financial services sector.
When different regulators apply differing standards and controls to AI implementation, this can lead to operational inefficiencies and inconsistencies for institutions operating multi-nationally. For global companies to adopt AI effectively, regulators should collaborate to establish a uniform standard of AI implementation, curbing competitive imbalances and uneven operational burdens across regions. A shared global standard would also streamline auditing and reporting processes, ensuring consistency and transparency across jurisdictions.
Regulators should provide clearer guidance on the minimum standards AI models must meet to be deemed acceptable in financial services. This could be achieved through regulatory guardrails or by aligning with established frameworks such as the IEEE guidelines for artificial intelligence. Additionally, requiring regular audits to ensure model outcomes are explainable to end users and setting minimum qualification standards for data scientists in regulated financial institutions would strengthen oversight. Given the high-stakes nature of financial crime compliance, AI models should adhere to robust standards of accuracy, ethics, and transparency that align with the expectations of the FCA.