By Dan Byrne for AMLi
AN EXPERT IN FinCrime compliance has warned that the use of artificial intelligence in AML needs to be backed up by a robust understanding of how it works.
Max Lerner, Managing Director and Global Head of Compliance for Sanctions, Anti-Bribery and Corruption at State Street Bank, said that adopting AI was just one part of a long process, and that regulators will want assurances that a firm understands the technology too.
“Regulators will want to make sure you haven’t just pushed a button on a magic box,” Lerner told a Silent Eight webinar Tuesday. “You want to make sure the humans understand what the magic is actually performing.”
“How does it work? What controls are in place? Does it actually produce the results you want it to? How are you sure it will continue to do that if you just let it run?”
Lerner suggested that firms needed to be confident in their answers to these questions, and that until they were, regulators would be “hard-pressed” to embrace the potential of AI as a new tool fighting dirty money.
“If you go in thinking that you’re getting a magic box which just solves everything, that will never actually solve the problem,” he summarised.
Artificial intelligence is rapidly becoming a more common weapon in firms’ AML arsenals – spurred in part by the mass migration to digital systems during the pandemic.
Indeed, in late 2020 Rob Leslie – CEO of digital identity solutions firm Sedicii – told AMLintelligence that 2021 would see a “significant increase” in electronic work over paper-based equivalents.
Leslie, who founded Sedicii in 2013, said that this was being helped by the development of things like Privacy Enhancing Technologies – allowing more data to be shared without risking confidentiality.
But the vast array of potential benefits that come with automated AML have been contrasted with the worry that having too little human input – or even understanding – may make the innovation useless.
Lerner’s comments were echoed at the Tuesday webinar by Silent Eight Senior Vice President John O’Neill, who suggested that the ‘magic box’ approach was incompatible with FinCrime compliance.
“AI needs to have strict human oversight,” he told the webinar. “And in fact, it should make that oversight substantially easier than it is today.”
“At no point should your AI be making changes to your model without multiple levels of human supervision. And a good AI can enable that.”
O’Neill listed a number of things that firms should expect if they are implementing a fit-for-purpose AI system:
It should be a system that brings “penetrating insights” into current processes, efficiency, and where bottlenecks occur, he said.
It should also give a firm the power to “back-test” proposed policy changes, allowing a more rounded assessment before they are made. And, he added, it should make governance “significantly easier, not harder.”
O’Neill also addressed the common worry among firms and regulators that AI is no ‘silver bullet’, and that bad data fed into it will result in bad data coming out.
While he acknowledged that this was true, he also stressed that a good AI can take what many might consider “bad data” – such as incorrectly formatted names and addresses – recognise it, and fix it.
This, he said, was a process that would only get better as the machine learns more and more.