Improving the fight against financial crime with artificial intelligence

Teams fighting financial crime within financial institutions face a daunting task. Alongside monitoring and evaluating billions of transactions with precision, they must maintain a good customer experience and comply with growing, ever-changing regulatory requirements.

To meet these demands, financial crime teams must overcome a range of operational issues, from staffing shortages and inefficient processes to workflow bottlenecks and technology gaps. To address these and other challenges, financial institutions should embrace artificial intelligence (AI), specifically in the form of machine learning, say Ruben Velstra and Michel Witte of financial services consultancy Delta Capita.

“Automation presents several opportunities for financial crime functions,” says Velstra. “Take the anti-money laundering segment as an example. AI-powered solutions can improve productivity by identifying high-risk situations that require human attention, and detect hidden risks across siloed processes.”

Witte gives another example: “With artificial intelligence, solutions can monitor customer transactions throughout their lifecycle and turn them into behavioral insights. This allows financial crime teams to access customer ‘profiles’ at lightning speed, instead of building them manually.”
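
The kind of behavioral profiling Witte describes can be sketched in a few lines. The sketch below is purely illustrative (the fields, the z-score rule, and the thresholds are assumptions, not Delta Capita's actual method): it aggregates a customer's transaction history into a profile, then flags transactions that deviate sharply from it.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    customer_id: str
    amount: float
    counterparty: str

def build_profile(history):
    """Aggregate a customer's transaction history into a behavioral profile."""
    amounts = [t.amount for t in history]
    return {
        "avg_amount": mean(amounts),
        "std_amount": stdev(amounts) if len(amounts) > 1 else 0.0,
        "counterparties": {t.counterparty for t in history},
    }

def is_unusual(profile, tx, z_threshold=3.0):
    """Flag a transaction that deviates sharply from the learned profile."""
    if profile["std_amount"] == 0:
        return tx.amount != profile["avg_amount"]
    z = abs(tx.amount - profile["avg_amount"]) / profile["std_amount"]
    return z > z_threshold or tx.counterparty not in profile["counterparties"]

history = [Transaction("c1", a, "grocer") for a in (20, 25, 22, 30, 27)]
profile = build_profile(history)
# A large payment to a never-seen counterparty stands out against the profile
print(is_unusual(profile, Transaction("c1", 5000, "offshore-ltd")))
```

A production system would of course use far richer features (velocity, geography, network links) and a trained model rather than a fixed z-score cut-off, but the profile-then-compare shape is the same.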

That’s easier said than done, though. In practice, the adoption of artificial intelligence is still in its infancy. “It’s still a niche market, mostly driven by true tech enthusiasts,” says Witte. “That’s mainly due to the challenges teams encounter along the way.”

According to the duo, the greatest difficulties fall into three groups: skills; data quality and availability; and transparency and understanding.

Skills
“For the adoption of AI, financial institutions tend to focus on hiring specialists in data science and internal ratings-based models who are not always experienced in using AI,” notes Witte. “But AI skills should no longer be the exclusive domain of number crunchers and data wizards. More and more people can use analytics without resorting to complex techniques. Algorithms are increasingly generated automatically, which changes the expertise required.”

To solve this problem, companies need to upskill their existing staff to focus on understanding and interpreting results. When recruiting to fill these gaps faster, it can be very difficult to find people who can use the technology effectively. It is therefore essential that companies have mechanisms in place to complement their existing talent, as high-performing project teams must combine technical expertise with financial crime expertise to operate effectively.

Velstra adds, “Managing alerts or signals from AI systems also requires a different mindset from operational analysts. Rules-based systems have mostly black-and-white decision-making processes. Using AI to analyze customer behavior requires a more proactive, risk-based, customer-centric approach, and more professional judgment.”

Data quality and availability

In computing, garbage in, garbage out (GIGO) is the principle that bad or nonsensical input data produces poor-quality output. So even if institutions have market-leading AI capabilities, they need to ensure a constant flow of credible data to feed them. But data standards often fall short, according to Witte.

“Financial institutions struggle to keep all customer information up to date,” he continues. “And customer data is often duplicated across internal systems or stored in silos. If information has been structured around siloed product lines, for example, it can be difficult to analyze the data automatically and holistically.”
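
To illustrate the duplication problem Witte mentions: the same customer recorded slightly differently in two product-line silos can often be linked with even a crude fuzzy match. A minimal sketch, with assumed record names and an arbitrary similarity threshold (real record-linkage systems use far more signals, such as addresses, dates of birth, and identifiers):

```python
from difflib import SequenceMatcher

def normalize(name):
    """Lowercase, strip punctuation, and collapse whitespace."""
    return " ".join(name.lower().replace(".", "").split())

def likely_same_customer(a, b, threshold=0.85):
    """Crude record-linkage check across siloed systems."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

# The same customer, recorded differently in two product-line silos
print(likely_same_customer("J. A. Smith Ltd.", "JA Smith Ltd"))
```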

Transparency and understanding
Velstra says institutions need to build their capacity for data interpretation and transparency. Having machines efficiently process quality data is useless if the output cannot be understood and interpreted by a company’s people. Without this understanding, businesses can also end up falling foul of regulatory changes, putting them in a difficult position.

Continuing, Velstra says, “The use of AI in financial crime is always controversial, especially among regulators. It is therefore important to ensure that data and analytical models are used in an ethical manner. Institutions should be aware that data may contain bias; that automated decision-making must be reconstructable so that auditors can review it; and that proper manual safeguards and rigorous testing should be in place in the meantime.”

Velstra cites a recent example of a German bank that inadvertently blocked hundreds of customer accounts after tightening its automated checks. The ensuing reputational backlash highlights the risks of not adequately addressing the above points, and underlines the importance of aligning with internal stakeholders, such as compliance and audit, to support AI usage goals and embed them in company policies.

How data and technology can help

According to Velstra and Witte, the first step in overcoming all of these challenges is to recognize that data is both a problem and an opportunity. The underlying issues should not be approached solely from the angle of financial crime, but with a much broader scope.

“Business initiatives, for example, can benefit from customer-centric data structures,” Witte elaborates. “Institutions should keep information up to date transparently by connecting to official public sources, such as chambers of commerce, and by regularly asking customers to validate their data.”

Beyond that, Velstra warns that institutions should always “think carefully about making or buying AI technology.” While it may be tempting to build your own AI system to eliminate middlemen, not all institutions are able to assemble specialized teams for such a project. In that case, “efficient and dedicated third-party providers are available” and should be used.

Continuing, Velstra explains, “Financial firms that are confident in their in-house capabilities need to find the right balance between continuous experimentation and regularly putting relevant AI use cases into production.”

Illustrative examples include: using AI in transaction monitoring to triage or prioritize alerts from rule-based scenarios; anomaly detection – generating alerts for specific risks that existing rules cannot easily detect; and increasing the effectiveness of name matching in sanctions screening.
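
The first of these examples, triaging alerts from rule-based scenarios, can be sketched as a scoring-and-ranking step on top of the existing rules engine. The features and weights below are invented for illustration; a real model would be trained on historical alert dispositions rather than hand-set:

```python
import math

def triage_score(alert, weights=None):
    """Toy logistic scorer for rule-generated alerts (illustrative weights)."""
    w = weights or {"amount_zscore": 0.8, "new_counterparty": 1.5, "high_risk_country": 2.0}
    bias = -2.0
    s = bias + sum(w[k] * alert.get(k, 0) for k in w)
    return 1 / (1 + math.exp(-s))  # probability-like priority in (0, 1)

alerts = [
    {"id": "A1", "amount_zscore": 0.5, "new_counterparty": 0, "high_risk_country": 0},
    {"id": "A2", "amount_zscore": 4.0, "new_counterparty": 1, "high_risk_country": 1},
]
# Work the highest-priority alerts first instead of first-in, first-out
ranked = sorted(alerts, key=triage_score, reverse=True)
print([a["id"] for a in ranked])
```

The rules still generate every alert (preserving auditability), while the model only reorders the queue, which is one reason triage is often a lower-risk first use of AI than fully automated decisioning.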

“As we have pointed out, artificial intelligence is not easy to adopt,” concludes Witte. “But being aware of the challenges and developing a solid strategy will open up a plethora of possibilities. Now is the time to start reaping the rewards.”
