At this month’s Global RegTech Summit, our Chief Data Scientist, Dr Janet Bastiman, was delighted to join the panel discussion: New technologies (AI, ML and Blockchain) to solve regulatory problems.
Watch the replay of the discussion.
The panel comprised:
- Caroline Pham - Managing Director, Global Regulatory Strategy & Policy - Citigroup
- Matthew Homer - Executive Deputy Superintendent, Research & Innovation - New York State Department of Financial Services
- David Lillis - Principal Investigator - CeADAR, University College Dublin
- Dr Janet Bastiman - Chief Data Scientist - Napier
- Astrid Freier (moderator) - Partner - Early Fintech Ventures
Here’s a summary of the session in three key points:
1. Regulatory shifts are supporting artificial intelligence adoption
From a regulatory perspective, the 2010s marked a significant shift in how regulators supervised the market. It became clear that regulators needed to move towards a more automated way of supervising.
Today, regulators recognise they need to transform alongside industry because, ultimately, businesses can’t move forward until regulators do. Regulators are now clearer about the use of artificial intelligence (AI), and as a result the resistance to change and the desire to keep doing things the old way are no longer there. Many regulators now use sandboxes to support the adoption of AI.
It’s important for regulators and industry to work together towards this more automated way of supervising; regulators and regulated entities need to go on the journey together. The greater control AI provides is expected to improve relationships with regulators.
2. Artificial intelligence must solve business problems
Ten years ago, the capability to apply AI to business problems wasn’t there; today that is no longer the case. Whatever technology you put in place must solve your business problems, and you have to work with your vendor to ensure they understand your problems, data and pain points.
You shouldn’t implement any technology superficially just to tick a regulatory box.
It’s important to recognise that regtech enables companies to work smarter, not harder, but deploying artificial intelligence and machine learning takes time, not least because of the need to build the infrastructure to support it.
3. Building trust in artificial intelligence is key
There has been some resistance to applying AI to business problems because of the absence of explainability; historically, AI tools have not been aimed at the person in the compliance function. The good news is that a lot of work has gone into explaining AI “opaque boxes”.
Building trust in AI is now key, and this begins with establishing an ethical framework so that everyone understands what you’re doing with AI and how transparent it is.
It’s important to recognise there’s a big difference between AI making decisions and AI simply informing them; AI should be introduced carefully to do the latter. It should be supplemented by human intelligence and deployed with clear explainability, so all users can understand the reasoning behind its findings and outputs. New EU regulations make explainability a requirement for high-risk applications.
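To illustrate what explainability can look like in practice, here is a minimal, hypothetical sketch using permutation importance, one common model-agnostic technique. The model, data and feature names are invented for the example and are not Napier’s implementation; the point is simply that a compliance user can be shown which inputs the model relied on.

```python
# Illustrative only: a model-agnostic explainability sketch using
# scikit-learn's permutation importance. The model and feature names
# below are hypothetical, not Napier's implementation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["txn_amount", "txn_frequency", "country_risk", "account_age"]
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in model score:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in sorted(
    zip(feature_names, result.importances_mean), key=lambda p: -p[1]
):
    print(f"{name}: {importance:.3f}")
```

Ranked importances like these are not a full explanation of an individual decision, but they give compliance users a starting point for understanding what drives a model’s outputs.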
Outcome tracking, where the outputs of a new AI model are compared with those of the old model running alongside it, can be really helpful for building trust, both within the business and with regulators.
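To make the idea concrete, here is a minimal sketch of outcome tracking under assumed conditions: the new model runs in “shadow” alongside the old one, its alerts are never actioned, and agreements and divergences are tallied for review. All model and parameter names are hypothetical.

```python
# Illustrative only: a minimal outcome-tracking sketch. The new model
# runs in shadow (its output is never actioned) and is compared with
# the old model on the same inputs.
from dataclasses import dataclass

@dataclass
class Comparison:
    total: int = 0
    agreed: int = 0
    new_only_alerts: int = 0
    old_only_alerts: int = 0

def track_outcomes(transactions, old_model, new_model, threshold=0.5):
    stats = Comparison()
    for txn in transactions:
        old_alert = old_model(txn) >= threshold
        new_alert = new_model(txn) >= threshold  # shadow: never actioned
        stats.total += 1
        stats.agreed += old_alert == new_alert
        stats.new_only_alerts += new_alert and not old_alert
        stats.old_only_alerts += old_alert and not new_alert
    return stats

# Hypothetical usage: toy scoring functions standing in for real models.
stats = track_outcomes(
    transactions=[{"amount": 100}, {"amount": 90000}],
    old_model=lambda t: 0.9 if t["amount"] > 10000 else 0.1,
    new_model=lambda t: 0.8 if t["amount"] > 50000 else 0.2,
)
print(stats)  # agreement rate and divergences to review with regulators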
The tech industry has long grappled with managing vast amounts of data, but we are now at the point of having the data organisation, digitalisation and explainability in place to support the adoption of AI.
Be the first to know about Napier’s events
Sign up for our latest industry insights to be the first to know what’s going on in and around the world of Napier and compliance. Or follow us on LinkedIn and Twitter.