The speed at which Artificial Intelligence (AI) is being adopted has brought about a revolution in the way organisations approach financial crime compliance. By analysing vast amounts of data in real time, AI-powered compliance systems can identify potential risks and flag suspicious activity accurately and efficiently, helping organisations stay one step ahead of fraudsters and money launderers. However, to fully reap the benefits of AI, or any technology for that matter, it's essential that it is used in an ethical and responsible manner, enhancing rather than inhibiting the wellbeing of the stakeholders affected by its outcomes.
According to the Alan Turing Institute’s ‘Guide for the responsible design and implementation of AI systems in the public sector’, AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.
AI-generated outcomes and results are only as good as the data used to train and test the algorithmic functions. Without balanced data, AI systems can produce flawed and biased results, ultimately impeding their ability to operate in a justifiable manner. When it comes to financial crime compliance, there are several significant concerns regarding the fair and ethical use of AI, including:
Bias and Discrimination:
The data samples used to train and test algorithmic systems are often inadequate and unrepresentative of the populations from which they draw inferences. As a result, the outcomes those systems produce can be biased and discriminatory, because they rest on flawed data.
Biased features, metrics, and analytic structures in the models that enable data mining can reproduce, reinforce, and amplify patterns of marginalisation, inequality, and discrimination. Take irregular payments as an example: individuals on a zero-hours contract won’t know from one week to the next how much money they are bringing in. Their bank accounts will show many irregular payments in and out, mimicking genuinely suspicious activity, and an AI algorithm may therefore flag them as suspicious.
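To make this concrete, here is a minimal, purely illustrative sketch (with hypothetical figures and a hypothetical threshold, not any real monitoring rule) of how a naive volatility-based check can flag a zero-hours worker’s legitimate income as suspicious while a fixed salary passes untouched:

```python
import numpy as np

# Hypothetical monthly deposit patterns (illustrative figures only).
salaried_deposits = np.array([2500, 2500, 2500, 2500, 2500, 2500])   # fixed salary
zero_hours_deposits = np.array([300, 1800, 0, 950, 2400, 150])        # irregular income

def naive_irregularity_score(deposits: np.ndarray) -> float:
    """Coefficient of variation: a crude proxy for 'erratic' cash flow."""
    return deposits.std() / (deposits.mean() + 1e-9)

THRESHOLD = 0.5  # hypothetical tuning value

for label, deposits in [("salaried", salaried_deposits), ("zero-hours", zero_hours_deposits)]:
    score = naive_irregularity_score(deposits)
    print(f"{label}: irregularity={score:.2f} flagged={score > THRESHOLD}")
```

The zero-hours account scores far above the threshold and is flagged, even though nothing illicit is happening; a rule tuned only on salaried customers never saw this pattern as normal.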
Lack of Accountability:
AI systems may automate some human cognitive functions, but it is far harder to ascribe accountability to a system for its algorithmically generated outcomes. The complex and distributed nature of the design, production, and implementation of AI systems can make it difficult to pinpoint responsible parties. This poses an issue for financial institutions using AI systems for compliance and monitoring purposes, as they must ensure accountability and transparency in the outcomes generated. This is where models such as human-in-the-loop come into the picture, overlaying human judgement onto AI at the different stages of setting up, tuning, and testing the algorithms.
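As a rough sketch of what that overlay can look like in practice (the alert structure, score, and threshold below are hypothetical, not a description of any particular product), the model only prioritises alerts while a human analyst keeps ownership of the final decision:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    account_id: str
    model_score: float  # risk score from the AI model, 0.0-1.0

def triage(alert: Alert, auto_close_below: float = 0.3) -> str:
    """Human-in-the-loop triage: the model prioritises; people decide.

    Low-score alerts are closed automatically, everything else is queued
    for an analyst, and no account is ever suspended by the model alone.
    """
    if alert.model_score < auto_close_below:
        return "auto-closed"          # model confident the activity is benign
    return "queued-for-human-review"  # an analyst owns the final decision

# Example: a borderline alert goes to a person, not straight to account suspension.
print(triage(Alert(account_id="ACC-123", model_score=0.62)))
```

Keeping the human decision (and who made it) on record is what makes the outcome attributable to someone, rather than to an opaque system.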
Unreliable, Unsafe, or Poor-Quality Outcomes:
Poor management of data, inadequate design and production methods, and questionable deployment practices can all contribute to the creation and distribution of AI systems that produce unreliable, unsafe, or substandard outcomes.
In his book ‘Ethical Machines’, Reid Blackman recommends a compromise strategy of "AI for not bad", which focuses on avoiding ethical pitfalls in the pursuit of results, rather than an idealistic strategy of "AI for good". While it's essential to focus on the potential benefits of the machine learning models we in financial crime compliance create, it's equally important to consider what can go wrong with these models and how they can be misused.
The biggest issue in financial crime compliance is ensuring that the impact of poor AI-generated outcomes doesn’t disadvantage people in lower socioeconomic groups, whose transaction patterns may not fit what financial institutions consider ‘normal’. For example, a person wrongly flagged as suspicious because of their ethnic, racial, or socio-economic background might have their accounts suspended because their behaviour ‘fits’ money-laundering typologies produced by biased training data.
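One way teams can watch for this kind of disparate impact is to track false-positive rates per customer group. The sketch below is illustrative only, with made-up outcomes and a hypothetical ‘income_band’ attribute, but it shows the shape of such an audit:

```python
import pandas as pd

# Hypothetical alert outcomes: 'flagged' is the model's decision,
# 'confirmed' is whether an investigator actually found suspicious activity.
alerts = pd.DataFrame({
    "income_band": ["low", "low", "low", "low", "high", "high", "high", "high"],
    "flagged":     [1,     1,     1,     0,     1,      0,      0,      0],
    "confirmed":   [0,     0,     1,     0,     1,      0,      0,      0],
})

# False-positive rate per group: share of genuinely innocent customers who were flagged.
innocent = alerts[alerts["confirmed"] == 0]
fpr_by_group = innocent.groupby("income_band")["flagged"].mean()
print(fpr_by_group)
```

In this toy data, innocent customers in the low-income band are flagged far more often than those in the high-income band; a gap like that is exactly the signal an ethics review should investigate.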
Building ethical machines to fight financial crime
The Alan Turing Institute’s guidelines specify how to design and deliver sufficiently interpretable AI systems. For high-stakes sectors like financial crime compliance, it is crucial to assess which stakeholders are likely to be most vulnerable to the outcomes of the systems you’re building, and to fine-tune strategies to accommodate their needs. This is why diversity of thought and experience is critical in the teams building these systems.
A multidisciplinary team with a deeply ingrained culture of responsibility may be better at adopting governance architectures ethically than a homogeneous team. Using transparent AI systems that thoroughly weigh the impacts and risks of models will help the team put adequate forethought into how the outcomes of the system’s decisions, behaviours, or problem-solving tasks can be optimised.
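As a simplified illustration of what ‘transparent’ can mean here (the features, data, and model choice below are assumptions for the example, not a recommended design), a deliberately simple model lets reviewers read and challenge the weight each feature carries in a decision:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical, randomly generated training data with three illustrative features.
rng = np.random.default_rng(0)
feature_names = ["cash_deposit_ratio", "cross_border_ratio", "payment_irregularity"]
X = rng.random((200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 200) > 1.0).astype(int)

# A deliberately simple, transparent model: its coefficients can be inspected
# and questioned during governance review, unlike a black-box score.
model = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

If a feature that acts as a proxy for a protected characteristic turns out to carry a large weight, that is visible here and can be challenged before the system goes live.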
Such practices for building trust and confidence in AI help ensure that the benefits of AI systems designed to fight financial crime outweigh their shortcomings. These systems must become deserving of public trust by guaranteeing, as far as possible, the safety, accuracy, reliability, security, and robustness of their solutions.
Ensuring ethical considerations are incorporated into AI is crucial for AML compliance teams at financial institutions. Our latest whitepaper, “AI Regulations: Global Approaches to Combat Money Laundering”, gives compliance teams an insight into the regulatory direction of travel and helps them put the right governance in place to implement AI ethically. Get your copy here
Improve your compliance processes with an award-winning solution
Get in touch to see how our intelligent platform can help your organisation transform its compliance, or request a demo to see it in action.
Photo by Steve Johnson on Unsplash