Artificial intelligence (AI) is rapidly transforming the way we live and work. It is being used in a wide range of settings, from vehicles that drive themselves, to unpaid and ever-cheerful virtual assistants, to state-of-the-art anti-money laundering (AML) compliance systems. AI algorithms can analyse vast data sets to identify the nuanced and subtle indicators of criminal financial activity that even an experienced human analyst may miss, which is why striking the right balance between AI capabilities and human expertise is essential to getting the best out of automation.
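To make that idea concrete, the sketch below shows what such screening might look like in miniature: an unsupervised anomaly detector scoring hypothetical transaction features and routing the most unusual cases to a human analyst. The data, features, and thresholds are illustrative assumptions, not a production configuration.

```python
# A minimal sketch of ML-based transaction screening on synthetic data;
# real AML systems use far richer features and carefully tuned models.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features: amount, transactions per day, cross-border share
normal = rng.normal(loc=[100.0, 3.0, 0.1], scale=[30.0, 1.0, 0.05], size=(1000, 3))
suspicious = rng.normal(loc=[9500.0, 40.0, 0.9], scale=[300.0, 5.0, 0.05], size=(10, 3))
transactions = np.vstack([normal, suspicious])

# Fit an unsupervised anomaly detector; no labelled cases are required
model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)

scores = model.decision_function(transactions)  # lower = more anomalous
flagged = np.argsort(scores)[:10]               # route the top outliers to an analyst
print(f"Routing {len(flagged)} transactions to human review")
```

The human-review step is exactly the balance mentioned above: the model narrows a vast transaction stream down to a shortlist, and the analyst makes the judgment call.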
Developing and deploying AI for financial crime compliance effectively can be challenging, because regulators around the world are moving at different speeds and taking different approaches: some focus on sector-specific regulations, while others are adopting more overarching frameworks.
Europe
The European Union (EU) has been the most proactive in developing an overarching AI regulatory framework. The framework is industry-agnostic and takes a risk-based approach, focusing on how AI systems are used and the risks they pose across all industries.
The EU's AI Act, first proposed in 2021, sets out rules for the development and use of AI in the EU. It enshrines the principles of privacy, transparency, and fairness while laying down a risk classification for AI systems, which financial institutions will need to consider before adopting one. Alongside it, the General Data Protection Regulation (GDPR) remains the comprehensive data protection law regulating the collection, processing, and storage of personal data in the EU.
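The Act groups AI systems into four tiers (unacceptable, high, limited, and minimal risk), with obligations that scale with the tier. The sketch below shows one way a compliance team might encode that triage internally; the mapping of specific use cases to tiers is a simplified assumption for illustration, not legal advice.

```python
# Illustrative encoding of the EU AI Act's four risk tiers; the
# use-case-to-tier mapping is an assumed example, not legal advice.
from enum import Enum

class AIActRisk(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, logging, human oversight"
    LIMITED = "transparency obligations, e.g. disclosing AI use"
    MINIMAL = "no additional obligations"

# Hypothetical internal triage table for a financial institution
USE_CASE_TIERS = {
    "social_scoring_of_customers": AIActRisk.UNACCEPTABLE,
    "creditworthiness_assessment": AIActRisk.HIGH,
    "aml_transaction_monitoring": AIActRisk.HIGH,
    "customer_service_chatbot": AIActRisk.LIMITED,
    "internal_spam_filter": AIActRisk.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Look up the assumed tier for a use case and its headline obligations."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        raise ValueError(f"Unclassified use case: {use_case}; assess before deployment")
    return f"{use_case}: {tier.name} -> {tier.value}"

print(obligations("aml_transaction_monitoring"))
```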
United Kingdom
While the UK has yet to develop AI-specific regulations, the Artificial Intelligence Public-Private Forum (AIPPF) was set up by the Bank of England and the Financial Conduct Authority (FCA) to explore the use of AI in financial services. The AIPPF's final report indicates the direction of travel for AI regulation in the UK's financial services sector, and notes that AI has proven very effective in tackling financial crime, particularly AML and fraud detection, where its adoption is already widespread.
By embedding model risk management procedures when adopting AI, so as to mitigate the new and unique risks the technology introduces, firms can safely reap the benefits of AI and machine learning (ML) models while complying with regulations and building supervisory trust.
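One staple of such procedures is ongoing drift monitoring: checking that the population a model scores in production still resembles the one it was validated on. The sketch below computes the Population Stability Index (PSI) between development-time and live score distributions; the 0.1/0.25 alert thresholds are common rules of thumb, not regulatory requirements.

```python
# A sketch of model monitoring via the Population Stability Index:
# PSI = sum((actual% - expected%) * ln(actual% / expected%)) over score buckets.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, buckets: int = 10) -> float:
    """Compare the live score distribution against the development baseline."""
    edges = np.quantile(expected, np.linspace(0, 1, buckets + 1))
    # Clip live scores into the baseline range so every value lands in a bucket
    actual = np.clip(actual, edges[0], edges[-1])
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)  # guard against log(0)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
dev_scores = rng.beta(2, 5, size=10_000)     # scores at model validation
live_scores = rng.beta(2.5, 5, size=10_000)  # scores in production, slightly drifted

value = psi(dev_scores, live_scores)
# Common rule of thumb: <0.1 stable, 0.1-0.25 investigate, >0.25 re-validate
print(f"PSI = {value:.3f}")
```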
United States
In the US, the Algorithmic Accountability Act (AAA) was introduced in Congress in 2022; if enacted, it would require impact assessments for bias and effectiveness, and would extend the Federal Trade Commission's (FTC) powers to implement more stringent regulations. In January 2023, the National Institute of Standards and Technology (NIST) released a voluntary framework to improve organisations' ability to incorporate trustworthiness considerations (“lawful, ethical and technically robust”) into the design and use of AI systems.
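To give a flavour of the bias assessments such legislation contemplates, the sketch below computes a disparate impact ratio, comparing how often a model flags customers from two groups, on hypothetical data. The four-fifths threshold shown is a convention borrowed from US employment law and is used purely for illustration.

```python
# An illustrative bias check of the kind an algorithmic impact
# assessment might include: disparate impact across customer groups.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical outcomes: which customers a model flagged, by group
group = rng.choice(["A", "B"], size=5_000, p=[0.7, 0.3])
flagged = np.where(group == "A",
                   rng.random(5_000) < 0.05,
                   rng.random(5_000) < 0.08)

rate_a = flagged[group == "A"].mean()
rate_b = flagged[group == "B"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

# "Four-fifths rule": a ratio below 0.8 is a common red flag (illustrative only)
print(f"Flag rate A: {rate_a:.3f}, B: {rate_b:.3f}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact; escalate for review")
```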
APAC: Singapore, Hong Kong and Australia
Singapore and Hong Kong have led the AI regulatory agenda in the APAC region by introducing guidelines and non-binding frameworks for market participants to adopt. Unlike the UK and EU, neither plans to introduce AI-specific regulations. Singapore's Model AI Governance Framework offers an industry-agnostic guide for firms seeking to implement AI. It prescribes a risk-based approach to measures such as data privacy, model explainability, and robustness and tuning, and advises matching the level of human involvement to the risk level of the AI-augmented decision-making, as the sketch below illustrates.
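The framework describes that matching in terms of three oversight modes: human-in-the-loop, human-over-the-loop, and human-out-of-the-loop. A firm might operationalise it with a severity-and-probability matrix along these lines; the thresholds and examples are illustrative assumptions.

```python
# A sketch mapping the risk of an AI-augmented decision to a level of
# human involvement, loosely following Singapore's Model AI Governance
# Framework; the matrix and the examples are illustrative assumptions.

def oversight_mode(severity_of_harm: str, probability_of_harm: str) -> str:
    """severity/probability in {'low', 'high'} -> recommended oversight mode."""
    if severity_of_harm == "high":
        # High-severity decisions keep a human making the final call
        return "human-in-the-loop: human makes the final decision"
    if probability_of_harm == "high":
        # Moderate risk: AI decides, but humans supervise and can intervene
        return "human-over-the-loop: human monitors and can override"
    return "human-out-of-the-loop: fully automated decision"

# Illustrative AML examples
print(oversight_mode("high", "low"))  # e.g. filing a suspicious activity report
print(oversight_mode("low", "high"))  # e.g. prioritising alerts in a queue
print(oversight_mode("low", "low"))   # e.g. deduplicating customer records
```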
Hong Kong's Guidance on the Ethical Development and Use of AI largely mirrors Singapore's and provides a set of recommended best practices for the development and use of AI. It advises firms to establish internal governance structures, supported by training and awareness-raising for personnel involved in AI development. The Hong Kong Monetary Authority has also issued guiding principles for consumer protection in the use of big data analytics and AI, incorporating feedback from the banking industry.
The Australian government has published voluntary AI principles to promote trust, safety, security, and reliability in AI deployment. These principles are aligned with the country's AI Ethics Framework, which institutions are encouraged to follow whenever they design, develop, or implement AI systems.
As AI continues to evolve and plays an increasingly significant role in AML, policymakers must stay informed about the latest technological advancements and adjust their regulatory frameworks to tackle money laundering and other financial crimes effectively. But whether the response is red tape or a green light, the world is learning that responsible AI innovation ultimately requires collaboration and cooperation among regulators, firms, and other stakeholders across the globe.
Napier has put together a whitepaper, “AI Regulations: Global Approaches to Combat Money Laundering”, to help financial firms understand the direction of travel for AI regulation globally, future-proof their solutions, and put the right governance in place to implement AI ethically.
Access your copy here