
In September, the EBA published a follow-up report presenting the feedback received during the industry consultation on machine learning (ML) for internal ratings-based (IRB) models. Building upon their November 2021 discussion paper, the EBA offers a comprehensive view of how ML techniques are being harnessed in IRB models, and the challenges and opportunities that lie ahead for practitioners. Keep reading to dive deeper into the insights provided by the EBA to understand this evolving landscape better. 

New EBA report on ML for IRB Models

Previously, ADC wrote an article exploring opportunities of machine learning (ML) in the regulatory landscape, emphasising the potential of ML in revolutionising credit risk. The main takeaway was the need for a harmonised approach, ensuring that the advancements in ML are balanced with ethical considerations and regulatory compliance. 

The EBA’s latest report expands upon these earlier observations. While we highlighted the promise of ML in credit risk, the EBA delves deeper into both the complexities and the potential benefits. Specifically, ML excels at modelling non-linear relationships and can amplify a model’s discriminatory power.

Balancing ML Techniques with Regulatory Standards

The enhanced capabilities of ML, as affirmed by industry feedback, underscore its promise of offering more precise risk evaluations and leveraging expansive datasets for superior decision-making. But as ML becomes more integrated into the financial system, it is crucial to understand how it interacts with other regulations. 

Balancing ML techniques with regulatory standards like the GDPR and the AI Act is a complex endeavour. With data and computing power at an all-time high, ML models are not only becoming a staple in credit risk but are also paving the way for more efficient and insightful financial systems. 

ML Techniques in Credit Risk Models

The EBA’s report offers a deep dive into the intricacies of ML in credit risk assessment. The feedback they received during the consultation has affirmed that the most significant challenges encountered during the development and validation phases revolve around overfitting, the proficiency of developers and validators, the complexities of interpreting results, and the categorisation of model changes. 

In the subsequent sections, we will dive deeper into each of these critical aspects. Below is a breakdown of these complexities, the tools being used to address them, and some of the benefits.  

1. Statistical Issues

The EBA report points out the risk of overfitting in ML models. This is when a model, perhaps trained using a technique like deep learning, excels with its training data but stumbles with new data. 

Techniques like regularisation are employed to counteract this, leading to more robust models that generalise better to new, unseen data. Feedback from respondents to the consultation paper shows that these statistical issues are well recognised by practitioners in the financial industry. 
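To make the idea concrete, here is a minimal, purely illustrative sketch of how regularisation constrains a model. It uses the closed-form ridge (L2-regularised) regression solution on synthetic data; the data, penalty values, and function names are our own assumptions, not anything prescribed by the EBA report.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X'X + lam*I)^(-1) X'y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Synthetic data standing in for borrower features and a risk outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
true_w = rng.normal(size=10)
y = X @ true_w + rng.normal(scale=0.5, size=50)

# A stronger penalty shrinks the fitted coefficients, which is the
# mechanism that curbs overfitting to noise in the training sample.
norms = [np.linalg.norm(ridge_fit(X, y, lam)) for lam in (0.0, 1.0, 100.0)]
assert norms[0] > norms[1] > norms[2]
```

The trade-off is the one the report alludes to: shrinking coefficients sacrifices some fit on the training data in exchange for more stable behaviour on new data.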

2. Human Skills

ML is as much about the technology as the talent behind it. Techniques like support vector machines or neural networks can be complex, demanding specialised knowledge. Both the CRCU and the validation function grapple with these intricacies. This raises questions about the balance between machine-driven decisions and human oversight. Ensemble methods, like random forests, are often employed to combine multiple models for better accuracy, providing a more holistic view and ensuring that decisions are not only based on a single model’s perspective. 
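The intuition behind ensembles can be shown without any ML library. The toy example below is our own construction (the "risk indicators" and the majority-label rule are invented for illustration): each weak learner looks at a single feature and errs on part of the data, while a majority vote across them recovers the true rule.

```python
from itertools import product

# Toy data: each point has three binary risk indicators; the true label
# is the majority of the three (no single indicator determines it alone).
data = list(product([0, 1], repeat=3))
labels = [1 if sum(p) >= 2 else 0 for p in data]

def stump(i):
    """A weak learner that looks only at indicator i."""
    return lambda point: point[i]

ensemble = [stump(0), stump(1), stump(2)]

def majority_vote(point):
    """Combine the weak learners by simple majority, as ensembles do."""
    votes = sum(model(point) for model in ensemble)
    return 1 if 2 * votes > len(ensemble) else 0

def accuracy(predict):
    return sum(predict(p) == y for p, y in zip(data, labels)) / len(data)

single_acc = accuracy(ensemble[0])      # 0.75: one indicator errs on 2 of 8 points
combined_acc = accuracy(majority_vote)  # 1.0: the vote recovers the true rule
```

Random forests work on the same principle at scale, averaging many decorrelated decision trees rather than three hand-built stumps.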

On top of that, the feedback from financial institutions underscored the essential requirement to maintain a high level of expertise and the possibility of human intervention throughout the entire process of developing and implementing the model, as per the CRR requirements. 

3. Explainability

As ML models, especially those like convolutional neural networks, become intricate, explaining their decisions becomes a challenge. Tools like Shapley values come into play, helping to break down and attribute the contribution of each feature in the model’s decision. This enhances transparency, ensuring stakeholders understand the ‘why’ behind a model’s decision, fostering trust. However, banks still face the key challenge of balancing improved model performance against the additional complexity and reduced interpretability it can bring. 
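For intuition, here is an exact Shapley computation on a deliberately tiny, hypothetical scoring function (the three applicant features, their coefficients, and the baseline are all invented for illustration). Each feature's attribution is its average marginal contribution across all orderings, and the attributions sum exactly to the gap between the applicant's score and the baseline score.

```python
from itertools import permutations

# Hypothetical scoring model over three applicant features (illustrative only).
def score(income, debt, history):
    return 2.0 * income - 1.5 * debt + 0.5 * income * history

applicant = {"income": 1.0, "debt": 1.0, "history": 1.0}
baseline = {"income": 0.0, "debt": 0.0, "history": 0.0}
features = list(applicant)

def value(coalition):
    """Model output when only features in `coalition` take the applicant's values."""
    args = {f: (applicant[f] if f in coalition else baseline[f]) for f in features}
    return score(**args)

def shapley(feature):
    """Exact Shapley value: average marginal contribution over all orderings."""
    perms = list(permutations(features))
    total = 0.0
    for order in perms:
        before = set(order[:order.index(feature)])
        total += value(before | {feature}) - value(before)
    return total / len(perms)

phi = {f: shapley(f) for f in features}
# Efficiency property: attributions sum to the applicant-vs-baseline score gap.
assert abs(sum(phi.values()) - (value(set(features)) - value(set()))) < 1e-9
```

With real models this exact enumeration is infeasible, which is why practical tooling approximates Shapley values; but the attribution logic regulators and stakeholders rely on is exactly the one above.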

4. Categorisation of model changes

IRB model changes need to be assessed in line with prudential requirements, which may require prior approval by the supervisor before the changes are implemented; at the same time, model changes may not be split into smaller changes or extensions of lower materiality.

Harnessing the power of ML in credit risk assessment requires a keen understanding of these complexities and the tools available. By doing so, we can effectively navigate its challenges and reap its benefits. 

Credit Risk and the AI Act

While ML techniques in credit risk models offer promising advancements, they also bring forth concerns that extend beyond just regulatory considerations. 

The AI Act classifies the evaluation of an individual’s creditworthiness (CWA) and credit scoring as high-risk. This is because such evaluations can significantly influence a person’s access to vital financial resources. The AI Act’s scope focuses on systems that might jeopardise an individual’s access to these resources.  

The EBA suggests that clarity is needed. They believe the AI Act should mainly target systems used for CWA and credit scoring when initiating a loan or related financial services. This means the Act would not directly cover other credit processes, like IRB models used for calculating capital requirements.   

The AI Act's Effects on IRB Models

While the AI Act might not directly target IRB models, its influence cannot be ignored. The Act’s indirect effects on IRB models could be substantial through the use-test requirements. These requirements mandate that financial institutions ensure their internal ratings, default and loss estimates are central to their risk management, decision-making, and credit approval processes.  

Consequently, even if the AI Act does not directly apply to IRB models, its requirements on creditworthiness evaluation at loan origination could still impact IRB models through the prudential use-test requirement. As ML becomes more integrated into the financial system, understanding these nuances is crucial.  

Navigating the Future of Credit Risk with Machine Learning

In summary, the EBA report paints a comprehensive picture of the use of ML techniques within IRB models, while also shedding light on potential hurdles in their implementation. 

These obstacles stem from practical considerations, insights drawn from industry experiences, and the ongoing evolution of regulatory frameworks in this domain. However, by addressing these complexities and harnessing the power of ML, the financial sector stands poised to achieve more accurate risk assessments, streamlined processes, and a more transparent and ethical approach to credit decision-making. 

Continue the conversation

Would you like to know more about the role ML Models can play within the area of IRB capital models? Or what ADC can do for you? Feel free to reach out to Jaap van Elsäcker (Project Lead, Financial Services) for a chat.

