Earlier this year I made the case for making artificial intelligence (AI) simple for analysts to use. Now I want to build on that post by exploring why explainability in AI is so important.
What is explainable AI?
First, let’s recap what explainable AI is.
Explainable AI is transparent in the way it works. The UK’s Financial Conduct Authority (FCA) defines AI transparency as “stakeholders having access to relevant information about a given AI system.”
All users of an AI-enhanced system should be able to access the information they need to understand its insights.
The Royal Society emphasises that the most useful explainable AI systems will provide different users with different forms of information in different contexts: for example, technical detail for a developer and more accessible information for a lay user.
Why is explainability important?
AI isn’t always as explainable as you might expect. Some of today’s AI tools are highly complex, if not outright opaque. The workings of complex statistical pattern-recognition algorithms, for example, can be too difficult to interpret, and so-called ‘black box’ models can be too complicated for even expert users to fully understand.
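To make the problem concrete, here is a minimal sketch, assuming a Python/scikit-learn setup and synthetic transaction data (the feature names are purely illustrative, not Napier’s), of how a powerful but opaque model can only be summarised, not read, by a human:

```python
# Minimal sketch only: a complex model fitted to synthetic "transaction" data.
# All feature names and thresholds below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["amount", "tx_per_day", "new_beneficiary", "cross_border"]
X = rng.random((1000, len(features)))
y = (X[:, 0] + 0.5 * X[:, 3] > 0.9).astype(int)  # toy "suspicious" label

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The fitted model is 200 separate trees -- accurate, but not something an
# analyst can read. A common post-hoc aid is a global feature-importance
# summary, which explains the model overall but says nothing about one alert.
for name, score in zip(features, model.feature_importances_):
    print(f"{name}: {score:.2f}")
```

Even with such a summary, the individual decision remains hidden inside the ensemble, which is exactly the gap explainability aims to close.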
This opacity is problematic for two reasons:
- system usability
- regulatory compliance.
How system usability is driving the need for AI explainability
While explainable AI is still in the early stages of adoption, Gartner predicts that, by 2025, 30 percent of government and large enterprise contracts for the purchase of AI products and services will require the use of explainable and ethical AI.
Explainability is absolutely necessary for several reasons:
- Helps analysts to understand system outputs quickly and easily. If analysts understand how the system works, they can make informed decisions efficiently.
- Helps reduce false positives. Explainability can provide recommendations and flag anomalies for analysts to investigate, adding a layer of triage that would otherwise have to be done manually.
- Gives confidence in the AI diagnosis by making clear why a decision was reached (a simple sketch of such per-alert ‘reason codes’ follows this list). Sometimes AI gives an output that is correct, but for the wrong reasons. Likewise, AI can and does make mistakes. Explainability makes it possible to understand why a mistake was made and even to retrain the system so it doesn’t happen again.
- Reduces the need to hire highly skilled data scientists. Since explainability enables analysts to understand decisions made by AI, companies can avoid the extra expense associated with hiring data scientists. After all, what’s the use of a highly sophisticated set of algorithms if you need to employ an army of data scientists to interpret the outputs?
- Encourages AI adoption and acceptance, since trust through understanding is essential to facilitate uptake.
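To illustrate what this can look like in practice, below is a minimal, hypothetical sketch of per-alert ‘reason codes’, assuming a simple linear risk model in Python/scikit-learn where each feature’s contribution is its coefficient multiplied by the scaled feature value. The feature names and data are assumptions for the example, not a description of Napier’s product.

```python
# Illustrative sketch only: per-alert "reason codes" from a linear risk model.
# Feature names and data are assumptions made for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
features = ["amount", "tx_per_day", "new_beneficiary", "cross_border"]
X = rng.random((1000, len(features)))
y = (X[:, 0] + 0.7 * X[:, 2] > 1.0).astype(int)  # toy "suspicious" label

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def reason_codes(transaction):
    """Each feature's signed contribution to this alert's score, largest first."""
    contributions = model.coef_[0] * scaler.transform([transaction])[0]
    return sorted(zip(features, contributions), key=lambda pair: -abs(pair[1]))

# An analyst reviewing one flagged transaction sees the "why", not just the score.
for name, contribution in reason_codes(X[0]):
    print(f"{name:16s} {contribution:+.2f}")
```

A rule-based surrogate or a SHAP-style explainer could serve the same purpose; the point is simply that the analyst sees which factors drove the alert.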
Regulatory compliance underpins the need for AI explainability
Business use cases aside, there are very important regulatory drivers for explainability too.
While the regulatory environment for AI is still in its infancy, regulatory momentum is building, and governments in many major economies have expressed varying levels of interest in introducing more robust regimes for regulating the use of AI, with explainability an important part of this (Whitehead, D., ‘AI and algorithms: the risks of our automated world’, Risk & Compliance Magazine, April-June 2021).
Regulators around the world, such as the UK’s Information Commissioner’s Office (ICO), the European Banking Authority (EBA) and the Monetary Authority of Singapore (MAS), emphasise the importance of individuals subject to AI-automated decisions understanding why a particular conclusion was reached by the algorithm. This driver for explainability overlaps with the General Data Protection Regulation (GDPR), and while it is a valid consideration in certain circumstances, it is only part of the story.
Explainability for the benefit of the user of the system is very important too, especially when it comes to detecting suspicious activity.
“It is the responsibility of supervised institutions to ensure the explainability and traceability of big data artificial intelligence-based decisions. At least some insight can be gained into how models work and the reasons behind decisions… For this reason, supervisory authorities will not accept any models presented as an unexplainable black box.” – BaFin
The European Commission recently published the first draft of its Artificial Intelligence Regulation, which stipulates requirements around AI and explainability.
As a regulatory technology provider, we at Napier wholeheartedly agree that “a certain degree of transparency should be required... Users should be able to interpret the system output and use it appropriately.”
Final thoughts
Explainable AI systems are important, but it is ultimately up to the human to decide what to do with a flagged anomaly. Explainable AI simply gives humans the what and the why behind an activity being flagged.
Book a demo to see just how simple AI is
The best way to see how simple AI can be is to see it for yourself. For information, advice or to book a demo of any Napier product, please get in touch with our expert team.