Artificial intelligence is coming to the fore as the answer to rising compliance costs, helping financial institutions keep up with an increasingly complex criminal and regulatory landscape.
But before an organisation analyses the market for solutions or thinks about implementation, there are steps and considerations that should not be overlooked if it is to get the best out of an AI-powered system, including whether AI is even the most suitable option.
Napier’s Chief Data Scientist, Dr Janet Bastiman, and FINTRAIL’s Managing Director, James Nurse, came together to create the ultimate guide for implementing AI for financial crime compliance.
Here are their six recommendations for deciding whether AI is right for your organisation and how to prepare for it:
1: Establish your readiness for AI
So, your company has had a conversation internally and decided to adopt AI into a wider anti-money laundering (AML) framework, or perhaps into one function, such as transaction monitoring.
Before embarking on your AI journey, you must first establish if your organisation is mature enough to adopt the technology. Conducting a maturity assessment as the first stage of the process reduces the risk of wasting resources on an unfeasible implementation project at a later stage.
A comprehensive maturity assessment involves evaluating your business, namely its people, data, and processes. This will allow stakeholders to identify strengths and areas for improvement, and accordingly prioritise what needs to be done to reach AI ‘readiness’.
Data maturity is the most important factor to evaluate when adopting AI solutions. Ensuring that your organisation holds enough historical data of sufficient quality for an AI model to draw upon is key to generating meaningful results.
When detecting customer-specific anomalies or changes in customer behaviours (as transaction monitoring and other AML solutions do), having the correct volume and quality of historical data is essential.
AI solutions will flag behaviours that are unusual compared to the data used to create the model, so without mature historical data, the system is unlikely to gain an accurate picture of customer behaviour and will produce more false positives, adding to, rather than reducing, the ‘noise’.
TOP TIP: Your firm should aim to have historical data spanning double the recommended ‘minimum’ period required to successfully run an AI solution.
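As a rough illustration, a readiness check along these lines can be scripted. The minimal sketch below assumes transactions sit in a pandas DataFrame with a timestamp column; the 12-month minimum is purely illustrative, not a vendor or regulatory figure:

```python
import pandas as pd

MIN_HISTORY_MONTHS = 12                   # assumed vendor minimum, illustrative only
TARGET_MONTHS = 2 * MIN_HISTORY_MONTHS    # the 'double the minimum' top tip

def history_span_months(transactions: pd.DataFrame) -> float:
    """Return the span of the transaction history in months."""
    ts = pd.to_datetime(transactions["timestamp"])
    return (ts.max() - ts.min()).days / 30.44   # average month length

def is_ai_ready(transactions: pd.DataFrame) -> bool:
    """Check whether the history spans double the assumed minimum."""
    span = history_span_months(transactions)
    print(f"History span: {span:.1f} months (target: {TARGET_MONTHS})")
    return span >= TARGET_MONTHS
```

A firm that fails a check like this may want to revisit its data retention, or accumulate more history, before committing to an AI rollout.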
2: Assess the regulatory environment
Depending on where your organisation and its subsidiaries reside and the type of business you are involved in, there are rules in place to ensure your systems are compliant. These rules are particularly important when it comes to informing the use of AI.
Naturally, considerations need to be made for data protection, such as the EU’s General Data Protection Regulation (GDPR). Any system designed to process personal data is subject to the GDPR regime, specifically Article 5(1)(a), which requires data controllers to continually reassess the likely impact of their use of AI on individuals and ensure it does not produce biased outputs.
Also relevant to member states of the EU is the proposed AI Act, which is concerned with the risk and explainability of AI systems and prohibits the use of AI in ways that contradict EU values.
Specific regulations also exist to govern the use of automated decision-making. While several key jurisdictions have discussed the use of AI in the financial sector, only the EU and the UK have issued guidelines so far, and the EU alone has introduced legislation or regulatory measures.
Regulations that require AI decision outputs to be explainable by humans are also maturing globally. For example, in Germany, BaFin expects firms to maintain human involvement in the interpretation and use of AI outputs to promote accountability, provide legal safeguards, and perform quality control.
If your organisation decides to extend coverage beyond traditional transaction monitoring models to include AI analytics, there are considerations to be made in order to meet regulatory requirements and to mitigate the risks of adverse consequences from decisions based on incorrect or misused models.
3: Don’t skip the risk assessment
The next stage should be to conduct an enterprise-wide financial crime risk assessment in line with your financial crime risk policy.
This assessment will provide an overview of the key financial crime risks to which your organisation is exposed, including information about emerging threats and any changes to the firm’s financial crime risk appetite. It will also inform you of the types of data required to manage those risks and any control procedures that you might consider to mitigate them.
Conducting a risk assessment at this stage will also inform your subsequent vendor selection process.
By obtaining a thorough understanding of your company’s financial crime threat landscape, you will be better informed of the issues you face from a risk perspective and which of those an AI solution could solve.
4: Identify relevant data points
Successful AI implementation requires managers to adopt an intelligent and networked approach towards financial crime that puts data analytics at its core. The risk assessment from the previous step should inform you of the data types required to manage the primary risks, so the next stage in the process is to locate and aggregate this data to train the AI model.
Not every firm is inherently well placed to collate the customer and transaction data, not to mention data from external sources, necessary to generate a holistic picture of customer behaviour.
For many financial organisations, data sources are spread across the business and may be owned by teams in different divisions or geographies, or stored on different systems, causing data silos. This data can be difficult to access, particularly if the firm is burdened with inflexible legacy technology.
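As a hedged illustration of what bridging those silos can involve, the sketch below joins customer records from a KYC export with transactions from a core banking platform; all file names, column names, and the kyc_status field are assumptions for illustration:

```python
import pandas as pd

# Customer records exported from the KYC system (owned by the onboarding team).
customers = pd.read_csv("kyc_system_export.csv")

# Transactions from the core banking platform (owned by the payments team).
transactions = pd.read_parquet("core_banking_txns.parquet")

# Join on the shared key so behaviour can be analysed per customer.
combined = transactions.merge(customers, on="customer_id", how="left")

# Transactions with no KYC match expose a gap between the silos.
unmatched = combined["kyc_status"].isna().sum()
print(f"{unmatched} transactions could not be matched to a KYC record")
```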
Data protection and data security requirements must also be considered to prevent the exposure of sensitive customer data, and these same requirements can create barriers to internal and cross-border data sharing.
You may also find that the data available to you doesn’t sufficiently illuminate the behaviours you identified in your risk analysis. If this is the case, a degree of investigation is required to understand where gaps in data exist and where you can source additional information.
TOP TIP: Whether you are using an AI-driven solution or a rules-based system, always ensure you understand what the priority data points are, i.e. those that are required to fundamentally drive useful results. For example, priority data for a transaction monitoring system includes historical data on the frequency, volume, currency, and direction of customers’ transactions.
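One way to make that tip operational is a simple coverage check before any vendor engagement. In the sketch below, the field names are illustrative assumptions rather than a standard schema:

```python
import pandas as pd

# Illustrative priority fields for a transaction monitoring feed.
PRIORITY_FIELDS = ["customer_id", "timestamp", "amount", "currency", "direction"]

def check_priority_fields(transactions: pd.DataFrame) -> dict:
    """Report whether each priority field exists and how fully it is populated."""
    report = {}
    for field in PRIORITY_FIELDS:
        if field not in transactions.columns:
            report[field] = "missing column"
        else:
            report[field] = f"{transactions[field].notna().mean():.0%} populated"
    return report
```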
5: Conduct thorough data assurance
Once you have identified the necessary data, the next steps are to validate it and provide assurance that this data is trustworthy and usable. Data quality and consistency must be assessed to iron out any formatting errors and fill any gaps. Data may be stored in different formats depending on how it was collected, so cross-referencing this information to make sure it can be merged is vital.
For example, restricted drop-down fields maintain a certain pre-programmed format but may not encapsulate the nuances of the data, whereas free-text inputs are subject to human error and formatting challenges, cannot be used directly in an AI model, and automated processing of them may introduce further issues. Know Your Customer (KYC) data is particularly difficult to map, as it is often managed externally in company registries, by regulators, or by suppliers of key information such as politically exposed persons (PEP) lists or sanctions data.
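A minimal sketch of this kind of normalisation is shown below, assuming one source records free-text country names and string amounts; the mapping table is illustrative and would be far larger, and centrally maintained, in practice:

```python
import pandas as pd

# Illustrative mapping from free-text country names to ISO codes.
COUNTRY_MAP = {"united kingdom": "GB", "uk": "GB", "germany": "DE"}

def normalise(df: pd.DataFrame) -> pd.DataFrame:
    """Bring free-text country names and string amounts into a mergeable form."""
    out = df.copy()
    # Collapse free-text country names onto ISO codes where recognised,
    # falling back to the original value where no mapping exists.
    cleaned = out["country"].str.strip().str.lower()
    out["country"] = cleaned.map(COUNTRY_MAP).fillna(out["country"])
    # Coerce amounts to numeric; unparseable entries become NaN for review.
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce")
    return out
```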
It is also important to check the auditability of the data, which involves assessing whether the data is fit for a given purpose.
It is critical at this stage of the implementation process to ascertain:
- If your data sources are valid and reliable
- If the data has already been processed in a way that could remove information
- How old the data is
- If the data needs to be refreshed
If the quality of the data feeding the AI is poor, it is unrealistic to expect meaningful and reliable results.
DATA ASSURANCE CHECKLIST:
- Relevancy - the data should meet the requirements for intended use.
- Accuracy - whatever the area of risk, the data used to mitigate it must be accurate.
- Completeness - the data should not have missing values.
- Recency - the data should be up to date; very old data may no longer be relevant and could need a refresh, which may in turn require input from wider team members or customers.
- Consistency - eradicating any formatting inconsistencies is vital for the AI system to accurately interpret the inputs.
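Several of the checklist items above lend themselves to simple automated scoring. The sketch below, with an illustrative column name and no claim to completeness, reports on the completeness, recency, and consistency items:

```python
import pandas as pd

def assurance_report(df: pd.DataFrame, date_col: str = "timestamp") -> dict:
    """Score a dataset against the completeness, recency and consistency items."""
    ts = pd.to_datetime(df[date_col], errors="coerce")
    return {
        # Completeness: share of cells holding a value.
        "completeness": f"{df.notna().to_numpy().mean():.1%}",
        # Recency: age of the newest record in days.
        "days_since_latest": (pd.Timestamp.now() - ts.max()).days,
        # Consistency: dates that failed to parse point to mixed formats.
        "unparseable_dates": int(ts.isna().sum()),
    }
```

Relevancy and accuracy, by contrast, require human judgement against the risk assessment and source systems, and cannot be scored mechanically in this way.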
6: Define your business operating model
The next stage is to define the objective of implementing AI within the broader financial crime compliance and business operating model, to ensure that the AI results have relevance and can feasibly be integrated into existing processes. It is important to consider the following:
- What information do financial crime teams need to mitigate risk and improve the effectiveness of the compliance programme?
- How will the AI system work with your “three lines of defence” business model?
- Where should responsibility for designing, implementing and assuring your AI model sit?
Each line of defence requires different outputs from a solution. For example, some teams will show more interest in reporting functionality, whereas others may require controls that help them to identify, assess and control the financial crime risks to the business. Not every solution will have functionality for each of these use cases.
These considerations should be used to define requirements for vendors before you go to market. Having a thorough understanding of the functionality required to fit within your operating model will narrow the list of potential AI vendors.
Ultimately, the product needs to enhance the effectiveness of the anti-financial crime function, so capturing the different teams’ requirements in the broader operating model is key.
We recently released a 12-step guide to AI implementation created by our Chief Data Scientist, Dr Janet Bastiman, and FINTRAIL’s Managing Director, James Nurse. This comprehensive resource addresses some of the most common challenges financial institutions face in this journey, and assesses the current regulatory landscape around the use of AI.
Improve your compliance processes with an award-winning solution
Get in touch to see how our intelligent platform can help your organisation transform its compliance, or request a demo to see it in action.