As we approach 2025, financial institutions and regulatory bodies face a watershed moment in the fight against financial crime. The dual pressures of adopting AI technologies while navigating a complex regulatory landscape make driving down financial crime a complex and expensive task. It is also clear that some markets have clearer guidance than others on how to reap the benefits of artificial intelligence (AI) in anti-money laundering (AML) strategies.
Insufficient controls not only cost billions of dollars in fines, but also damage organisations’ reputations and the economies they transact within. The Napier AI / AML Index 2024-2025 found that global economies could save $3.13 trillion annually by using AI to detect and prevent money laundering and terrorist financing.
AI is the topic that continues to cut through shifting geopolitical focuses, consumer demands and industry trends. In 2025, AI should be understood, explained, and audited. This is compliance-first AI.
Balancing innovation and risk management in regulation
2024 marked a year of heightened regulatory focus on the risks posed by digital currencies, heavy sanctions on Russia, and a shift towards AI. Sanctions compliance will continue to be a focus, but the new US administration may well have different priorities. Guidance on using AI in AML strategies also varies by market: the EU AI Act, for example, came into force in 2024, emphasising the need for transparency.
Broader regulatory coverage will be a welcome change. From Australia’s Tranche 2 reforms in 2024 to Canada’s upcoming Bill C-27, which includes the Artificial Intelligence and Data Act, the message from regulators is clear: it’s time for institutions to get more serious about AI and AML.
Flexible, outcome-focused regulations can foster innovation, safeguard financial systems, promote inclusive access to services, and empower financial institutions to better combat financial crimes.
Continued focus on transparency
The 2024 EU AI Act set a global precedent for human-centric, ethical AI going into 2025. While AI can significantly boost AML compliance by improving accuracy, efficiency, and customer experience, its improper use poses significant risks. To mitigate these, AI systems must be explainable and auditable, and in 2025 financial supervisors are looking to establish clear guidelines to ensure AI is used effectively to drive down financial crime.
Compliance is intrinsically linked to adherence to environmental, social and governance (ESG) guidelines. In 2025, ESG will no longer be a ‘nice to have’ but the lifeblood of companies, for investors and customers alike. Relying on unproven, unexplainable AI exposes institutions to uncertainty. In 2025, an understanding and ethical use of AI-driven insights in financial crime compliance will be non-negotiable.
Ensuring digital operational resiliency for consumers
The growing number and complexity of third-party arrangements in financial services and beyond have put digital operational resilience at the forefront for regulators going into 2025. Regulators are intensifying their oversight of compliance, data management, and cybersecurity.
Coming into effect in January 2025, the EU’s Digital Operational Resilience Act (DORA) requires financial institutions to adopt stricter IT security measures, ensuring they can withstand cyber-attacks and other disruptions. A strong, well-rounded approach will be necessary, encompassing ongoing IT framework oversight and detailed documentation to clearly demonstrate compliance.
This means consumers can have greater confidence in the availability and safety of their financial transactions and data. Globally, we will see a shift towards enhanced information-sharing and a culture of transparency in financial services, strengthening resilience and building trust across the digital financial services ecosystem.
Responsible AI risk governance practices and human teams
As financial institutions increasingly adopt AI-driven tools for AML, the need for stringent oversight becomes critical. AI systems can inherit biases from the data they are trained on, requiring diligent oversight and tuning by diverse human teams. Cross-functional teams spanning compliance and data science, together with the systems they use, should work in harmony to minimise such vulnerabilities. Effective AI governance frameworks must include regular reviews of models for fairness and compliance.
The job market is a perennial topic in discussions of AI’s impact. In 2025, and well into the future, humans will remain an important part of financial crime compliance teams, especially in the financial sector, where decisions carry significant consequences. People are essential for making decisions based on what AI presents to them in an explainable way. Success in financial services often comes down to positive customer experience, and that is not possible without soft skills and human judgement.
Preparing for financial crime compliance in 2025
AI for AML has traditionally been implemented without a strong compliance-first focus. Broad AI solutions, while promising in their potential for automation, often fall short of expectations when not tailored to specific use cases. Generalised AI systems that claim to work across a wide range of applications can bring challenges similar to those of legacy enterprise solutions, which struggled to deliver value for financial institutions in past decades.
Specialist, highly focused, and accurately tuned solutions, purpose-built for AML, are necessary to strike the delicate balance of driving down total cost of ownership (TCO) while driving up compliance. Learn more about where your focus should be in 2025 in the Napier AI / AML Index.