By Shagufta Siddiqa
The EU AI Act is the first comprehensive regulation aimed at governing artificial intelligence use across Europe.
Here, we assess its potential impact on Model Risk Management (MRM) within the scope of Financial Analytics in the Compliance domain.
This landmark legislation classifies AI systems by risk level and introduces strict compliance requirements to ensure transparency, safety, and fairness.
The foundation of the Act is a risk-based approach to classifying AI systems. Four categories are defined according to the risk an AI system or model poses: minimal risk, limited risk, high risk, and unacceptable risk.
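To make the tiering concrete, the short Python sketch below encodes the four categories with simplified summaries of the obligations attached to each. The tier names follow the Act; the example systems and obligation lists are illustrative paraphrases written for this sketch, not the Act's legal text.

```python
from enum import Enum

class AIRiskTier(Enum):
    """The EU AI Act's four risk tiers (examples and summaries are
    simplified paraphrases for illustration, not legal text)."""
    MINIMAL = "minimal"            # e.g. spam filters
    LIMITED = "limited"            # e.g. chatbots
    HIGH = "high"                  # e.g. credit scoring
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring

# Illustrative, non-exhaustive obligations per tier (an assumption of this
# sketch, intended only to show how the tiering cascades into duties)
OBLIGATIONS = {
    AIRiskTier.MINIMAL: ["voluntary codes of conduct"],
    AIRiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    AIRiskTier.HIGH: ["risk management system", "data governance",
                      "technical documentation", "human oversight",
                      "accuracy and robustness testing"],
    AIRiskTier.UNACCEPTABLE: ["deployment prohibited"],
}

for tier in AIRiskTier:
    print(f"{tier.value}: {'; '.join(OBLIGATIONS[tier])}")
```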
A set of six ‘general principles’ is intended to underpin the development and use of any AI model. These are:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Social and environmental well-being
So how does this relate to anti-money laundering (AML), and why is it important for financial institutions?
Because, taken together, these provisions show that the Act aims to ensure financial firms maintain robust and accurate risk management frameworks in today’s AI-driven world.
Compliance models were initially rule-based: analysts manually applied rules, calibrated to varying levels of risk tolerance, to weigh threats.
Over the last five years, compliance analytics has shifted markedly from rule-based structures to machine-learning models.
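To illustrate the shift, here is a minimal, hypothetical sketch contrasting a hand-written rule with a model that learns its decision boundary from labelled data. The feature names, thresholds, and synthetic data are assumptions made for illustration, not any institution's actual detection logic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical transaction features: [amount_eur, txns_last_24h, is_cross_border]
X = np.column_stack([
    rng.lognormal(mean=7, sigma=1.2, size=5000),  # transaction amounts
    rng.poisson(lam=3, size=5000),                # recent transaction counts
    rng.integers(0, 2, size=5000),                # cross-border flag
])
# Synthetic "suspicious" labels, for illustration only
y = ((X[:, 0] > 10_000) & (X[:, 2] == 1)).astype(int)

# Legacy approach: a hand-written rule with a fixed risk-tolerance threshold
def rule_based_alert(amount, n_txns, cross_border, threshold=10_000):
    return amount > threshold and cross_border == 1

# Modern approach: the model learns the boundary from labelled data
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

txn = np.array([[12_500, 5, 1]])
print("rule fires:", rule_based_alert(*txn[0]))
print("model P(suspicious):", round(model.predict_proba(txn)[0, 1], 3))
```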
This heavy influx of AI models created the need for quantitative standards applicable to machine-learning models as part of MRM tools in the Compliance domain.
MRM is the framework through which a financial institution systematically assesses and manages model risk.
Although major banks have ongoing MRM frameworks in place, the present frameworks require upgrades in data validation and model-effectiveness testing.
This would in turn address the technical robustness, privacy, and data governance requirements set out in the EU AI Act's stringent standards.
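As one hedged illustration of model-effectiveness testing, the sketch below compares a model's discrimination (AUC) on an out-of-time sample with its development-time benchmark and flags decay beyond a tolerance. The synthetic data and the 0.05 tolerance are assumptions of this sketch, not figures prescribed by the Act.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def synthetic_scores(n, signal):
    """Labels plus model scores whose separation is controlled by `signal`."""
    y = rng.integers(0, 2, size=n)
    scores = y * signal + rng.normal(size=n)
    return y, scores

y_dev, s_dev = synthetic_scores(5000, signal=1.5)  # development sample
y_oot, s_oot = synthetic_scores(5000, signal=1.0)  # out-of-time sample (degraded)

auc_dev = roc_auc_score(y_dev, s_dev)
auc_oot = roc_auc_score(y_oot, s_oot)
print(f"AUC: development={auc_dev:.3f}, out-of-time={auc_oot:.3f}")

# Flag the model for revalidation if performance decays beyond tolerance
if auc_dev - auc_oot > 0.05:  # the tolerance is an illustrative choice
    print("Effectiveness degraded beyond tolerance: trigger revalidation")
```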
Banks will also need to incorporate ongoing risk assessments for a dynamic list of Financial Crime risk typologies, along with incident-reporting frameworks.
These will have to not only align with the broader scope of model risk management but will also demand a more exhaustive approach to model validation, data quality, model-implementation testing, and explainability standards.
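One widely used data-quality check of this kind is the Population Stability Index (PSI), which measures drift between a model's development-time and production distributions. The sketch below implements the standard PSI formula; the 0.1/0.25 thresholds noted in the comments are conventional industry rules of thumb, not requirements of the Act.

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """Standard PSI: sum((actual% - expected%) * ln(actual% / expected%)),
    with bins derived from the expected (development-time) sample."""
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production values
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # A small floor avoids division by zero / log(0) in sparse bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(1)
dev_scores = rng.beta(2, 5, size=10_000)     # scores at model development
prod_scores = rng.beta(2.5, 5, size=10_000)  # slightly shifted production scores

# Conventional rules of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 revalidate
print(f"PSI = {population_stability_index(dev_scores, prod_scores):.3f}")
```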
Additionally, the Act mandates fines for non-compliance, with penalties reaching up to 7% of global annual turnover (or €35 million, whichever is higher) for the most severe violations, encouraging banks to align AI usage with established ethical and regulatory guidelines.
Although some ambiguity remains over how the high-risk criteria apply to finance, the AI Act is expected to drive a more cautious and standardized approach to AI within EU banks, influencing similar policies globally as banks strengthen compliance frameworks for AI model risk management.
In a world of both information and misinformation, AI regulation is a fundamental measure for ensuring additional oversight in a field like compliance analytics.
- Shagufta Siddiqa is Assistant Vice President in Model Validation at Barclays India.