By Dr Janet Bastiman
Chief Data Scientist, NapierAI
AI is on everyone’s lips, but how do compliance functions within financial institutions make sure they are ready to adopt AI for client screening, and understand which regulatory requirements they must comply with?
AI-powered risk scoring systems enable financial institutions to answer the crucial ‘why’ behind a flagged transaction or client, with more accurate results. In a world where every data point matters, staying ahead of risks and understanding the nuances of customer profiles is essential.
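As a minimal illustration of surfacing the ‘why’ alongside a score (the risk factors and weights here are hypothetical examples, not any vendor’s actual model), a scoring function can return the contributing reasons together with the number:

```python
# Minimal sketch: a transparent risk score that returns its reasons.
# The factors and weights below are hypothetical illustrations only.

RISK_WEIGHTS = {
    "pep_match": 40,               # match against a politically exposed persons list
    "sanctions_hit": 50,           # potential sanctions list match
    "high_risk_jurisdiction": 25,  # customer linked to a higher-risk country
    "adverse_media": 15,           # negative news coverage found
}

def score_client(flags: dict[str, bool]) -> tuple[int, list[str]]:
    """Return a risk score and the factors that contributed to it."""
    reasons = [name for name, present in flags.items()
               if present and name in RISK_WEIGHTS]
    score = sum(RISK_WEIGHTS[name] for name in reasons)
    return score, reasons

score, reasons = score_client(
    {"pep_match": True, "sanctions_hit": False,
     "high_risk_jurisdiction": True, "adverse_media": False}
)
print(score, reasons)  # 65 ['pep_match', 'high_risk_jurisdiction']
```

Returning the reasons with the score means an analyst reviewing the flag sees which factors drove it, rather than an unexplained number.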
But institutions need to take the right steps before implementing AI, to make sure it improves processes in the right way. Jumping straight to implementation is a mistake, and there are a few key considerations to address before making any decisions.
Step 1: Readiness and maturity assessment
Evaluating business processes is an important step to identify strengths, areas for improvement, and prioritise what needs to be done to reach AI readiness.
Financial institutions should assess whether they hold as much information as possible to verify customers, so that no criminal slips through the net due to missing or incorrect information. This means ensuring good data practices around storing and updating information; where external information is used, financial institutions should verify that it is correct.
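The kind of data-practice check described above can be sketched very simply. In this hedged example, the required fields and the one-year staleness threshold are illustrative assumptions, not a regulatory standard:

```python
# Minimal sketch: basic data-quality checks on a customer record before screening.
# Field names and the 365-day staleness threshold are illustrative assumptions.
from datetime import date, timedelta

REQUIRED_FIELDS = ["full_name", "date_of_birth", "nationality", "address"]

def check_record(record: dict, last_verified: date, today: date) -> list[str]:
    """Return a list of data-quality issues; an empty list means the record passes."""
    issues = [f"missing: {field}" for field in REQUIRED_FIELDS
              if not record.get(field)]
    if today - last_verified > timedelta(days=365):
        issues.append("stale: customer data not verified in over a year")
    return issues

issues = check_record(
    {"full_name": "A. Example", "date_of_birth": "1980-01-01",
     "nationality": "", "address": "1 Test St"},
    last_verified=date(2022, 1, 1),
    today=date(2024, 1, 1),
)
print(issues)
```

Records that fail such checks can be routed for remediation before any screening runs against them.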
Step 2: Regulatory environment assessment
Depending on where your organisation and its subsidiaries reside, and the type of business you are involved in, there are rules in place to ensure your systems are compliant. These rules are particularly important when it comes to informing the use of AI.
Data protection regulations are important for client screening, for example, the EU’s GDPR, which requires data controllers to constantly reassess the likely impact of their use of AI on individuals to ensure it does not produce biased outputs.
Additionally, the UK government recently published a pro-innovation regulatory framework for AI. The EU AI Act establishes a right to an explanation of automated decisions that affect an individual, so explainable AI is important to facilitate this.
Step 3: Risk assessment
This assessment will provide an overview of the key financial crime risks to which your organisation is exposed, including information about emerging threats and any changes to the firm’s financial crime risk appetite. It will also inform you of the types of data required to manage those risks and any control procedures that you might consider to mitigate them.
Step 4: Drilling down on the data
Successful AI implementation requires compliance teams to adopt an intelligent and networked approach towards financial crime that puts data analytics at its core. For many financial organisations, data sources are spread across the business, or across the chain of financial institutions. It may be owned by teams in different divisions or geographies, or stored on different systems, causing data silos. This data should be identified, but it can be difficult to access, particularly if the firm is burdened with inflexible legacy technology.
Once you have identified the necessary data, the next steps are to validate it and provide assurance that this data is trustworthy and usable.
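Validation at this stage can start with simple, automatable rules. The sketch below is an illustrative assumption of what such rules might look like (the field names and formats are hypothetical), not a standard validation suite:

```python
# Minimal sketch: simple validation rules that build assurance the data is usable.
# The rules and field formats are illustrative assumptions, not a standard.
import re
from datetime import datetime

def validate_customer(record: dict) -> list[str]:
    """Apply basic format and consistency rules; return any failures found."""
    failures = []
    # Date of birth must parse as YYYY-MM-DD and lie in the past.
    try:
        dob = datetime.strptime(record["date_of_birth"], "%Y-%m-%d")
        if dob >= datetime.now():
            failures.append("date_of_birth is in the future")
    except (KeyError, ValueError):
        failures.append("date_of_birth missing or malformed")
    # Country should look like a two-letter ISO 3166-1 alpha-2 code.
    if not re.fullmatch(r"[A-Z]{2}", record.get("country", "")):
        failures.append("country is not a two-letter code")
    return failures

print(validate_customer({"date_of_birth": "1990-06-15", "country": "GB"}))  # []
```

In practice these rules would be expanded and run continuously, so that assurance over the data keeps pace with changes to it.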
Step 5: Business operating model definition
The next stage is to define the objective of implementing AI within the broader financial crime compliance and business operating model, to ensure that the AI results have relevance and can feasibly be integrated into existing processes.
Step 6: Market analysis and vendor selection
Conducting market analysis involves assessing the RegTech ecosystem to identify the types of solutions available to you. Agile solution partners with modern, scalable architectures and no-code features help analysts use AI and see the explanations behind its outputs.
By considering the above, FIs can be confident that the go-live and implementation of AI for client screening will strengthen their financial crime compliance (FCC) processes.
To learn more about the pragmatic implementation of AI to transform your AML strategies, sign up for AML Intelligence’s webinar on 17th April.
THE AUTHOR: Dr Janet Bastiman is Chair of the Royal Statistical Society’s Data Science and AI Section and a member of the FCA’s newly created Synthetic Data Expert Group. Janet started coding in 1984 and discovered a passion for technology. She holds multiple degrees and a PhD in Computational Neuroscience. Janet has helped both start-ups and established businesses implement and improve their AI offerings, prior to applying her expertise as Chief Data Scientist at Napier. She regularly speaks at conferences worldwide on AI topics including explainability, testing, efficiency, and ethics.