Financial Risk Management and Explainable Trustworthy Responsible AI

23 Pages · Posted: 8 Jul 2021 · Last revised: 27 Dec 2021

Date Written: June 25, 2021

Abstract

This perspective paper is based on several sessions of the members of the Round Table AI at FIRM, with input from a number of external and international speakers. Its particular focus is the management of model risk for production models in banks and other financial institutions. The models in view range from simple rules-based approaches to Artificial Intelligence (AI) or Machine Learning (ML) models of high sophistication.

The typical applications of those models are predictions and decision making along the value chain of credit risk (including the accounting side under IFRS 9 or related national GAAP approaches), insurance risk or other financial risk types. We expect more models of higher complexity in the areas of anti-money laundering, fraud detection and transaction monitoring, as well as a rise of AI/ML models as alternatives to current methods for solving some of the more intricate stochastic differential equations needed for the pricing and/or valuation of derivatives. Models of the same type are also successful in areas unrelated to risk management, such as sales optimization, customer lifetime value estimation, robo-advisory and other fields of application.

The paper makes reference to recent related publications from central banks, financial supervisors and regulators, as well as from other relevant sources and working groups. It aims to give practical advice for establishing a risk-based governance and test framework for the mentioned model types, and it also discusses the use of recent technologies, approaches and platforms to support the establishment of responsible, trustworthy, explainable, auditable and manageable AI/ML in production. In view of the recent EU publication on AI (see European Commission 2021), also referred to as the EU Artificial Intelligence Act (AIA), we see added value in this paper as an instigator of further thinking outside the financial services sector, in particular where “high-risk” models in the sense of the mentioned EU proposal are concerned.

Our key takeaways are:

1. There need to be general principles, requirements and tests to control model risk and fitness-for-purpose for each model. AI is not a fixed category: we are dealing with a spectrum of mathematical models of varying complexity, of which ever more complex ones are gradually becoming feasible. The mentioned governance elements (principles, requirements, tests to control model risk etc.) should therefore hinge upon a model's purpose, influence on human lives and business impact, rather than upon its design or complexity. To satisfy these requirements, special tests will of course be necessary for more complex or even dynamic models. This holds true especially for the implementation of those models and their use in a scaling enterprise production environment. (A minimal sketch of such a purpose-driven test gate follows this list.)

2. To this end, it will be necessary and useful to combine the expertise and approaches of classical risk management and governance with those of data science and AI.

3. Many aspects of AI governance, algorithmic auditing and risk management of AI systems can be addressed with technology and computing platforms; in fact, an entire industry is emerging in this area. Many of the necessary techniques are essentially somewhat less complex and more transparent models in their own right, with associated maintenance and operating costs and with inherent (more indirect and smaller) operating risks of their own. Hence there will always be residual risk and, consequently, a need for human oversight. The residual risk should be covered via OpRisk management (IT risk, mal-decisioning risk, reputational risk) and by AI incident management or AI model insurance. (An illustrative monitoring sketch follows this list.)

4. Explainability, interpretability and transparency of models, data and decision making are key to making the remaining model risks manageable at all (“Explaining Explainable AI”). All three need to be directed towards the internal stakeholders of financial institutions but, depending on model purpose, also towards the outside world, particularly clients/consumers and supervisors. Each stakeholder needs to be informed about the model in a different, specific way. There are technologies and experts to support interfacing the different domains involved. (An explainability sketch follows this list.)

5. One particular aspect of the “Explainable AI” agenda is to enable the fairness of AI decision making or decision support from a societal perspective (linked to the ESG agenda). The associated fairness considerations, starting from the need to explicitly define a notion of fairness and to enable its implementation and ongoing validation, are by no means exclusive to AI modelling techniques; they pertain to classical decision making to the same extent. However, because AI models lack innate transparency, the cost of fairness will be higher for them. This should be taken into account in business decisions around the choice of model design. (A sketch of one explicit fairness definition follows this list.)
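
To make takeaway 1 concrete, the following is a minimal sketch in Python of a purpose-driven test gate. All names, categories and thresholds (ModelRiskProfile, required_tests, the impact levels) are hypothetical illustrations of the principle, not part of the paper; a real model risk framework would be considerably richer.

    # Hypothetical sketch: derive the test battery from a model's purpose
    # and impact on humans/business, not from its design or complexity.
    from dataclasses import dataclass

    @dataclass
    class ModelRiskProfile:
        purpose: str            # e.g. "credit_decision", "sales_optimization"
        affects_humans: bool    # direct influence on human lives?
        business_impact: str    # "low", "medium" or "high"

    def required_tests(profile: ModelRiskProfile) -> list:
        """Return the governance tests a model of this profile must pass."""
        tests = ["data_quality_check", "performance_backtest"]
        if profile.affects_humans:
            tests += ["fairness_check", "explainability_report"]
        if profile.business_impact == "high":
            tests += ["stress_test", "challenger_model_benchmark"]
        return tests

    print(required_tests(ModelRiskProfile("credit_decision", True, "high")))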
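
For takeaway 3, one simple and transparent monitoring control that supports human oversight is the population stability index (PSI), which flags when the score distribution in production drifts away from the reference data. This is a generic illustration under our own assumptions (the 0.25 escalation threshold is a common rule of thumb), not a technique prescribed by the paper.

    # Illustrative sketch: population stability index (PSI) for drift
    # monitoring of a production model's score distribution.
    import numpy as np

    def psi(expected, actual, bins=10):
        """PSI between a reference (e.g. training) sample and production scores."""
        edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
        # Widen the outer edges so production outliers are still counted.
        edges[0] = min(edges[0], actual.min()) - 1e-9
        edges[-1] = max(edges[-1], actual.max()) + 1e-9
        e_pct = np.histogram(expected, edges)[0] / len(expected)
        a_pct = np.histogram(actual, edges)[0] / len(actual)
        e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0)
        a_pct = np.clip(a_pct, 1e-6, None)
        return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

    rng = np.random.default_rng(0)
    drift = psi(rng.normal(0.0, 1.0, 10_000), rng.normal(0.3, 1.0, 10_000))
    print(f"PSI = {drift:.3f}  (> 0.25 is a common escalation trigger)")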
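
Takeaway 4 can be illustrated with SHAP, one widely used open-source library for per-decision feature attributions; the synthetic data and the gradient-boosting model below are our own stand-ins for a credit model, not examples from the paper.

    # Sketch: per-applicant feature attributions with SHAP for a
    # tree-based classifier (synthetic stand-in for a credit model).
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 4))                 # stand-ins for applicant features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)

    model = GradientBoostingClassifier().fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:5])    # one attribution per feature
    print(shap_values.shape)                      # (5, 4): five applicants, four features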
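
Finally, for takeaway 5, the point that a notion of fairness must be defined explicitly can be made with the simplest of examples: the demographic parity difference, i.e. the gap in approval rates between two protected groups (toy data below). This is only one of many possible fairness definitions; choosing, implementing and continuously validating one is exactly the deliberate decision the takeaway describes.

    # Sketch: demographic parity difference as one explicit,
    # monitorable fairness definition (toy data).
    import numpy as np

    def demographic_parity_diff(y_pred, group):
        """Absolute gap in approval rates between group 0 and group 1."""
        rate_0 = y_pred[group == 0].mean()
        rate_1 = y_pred[group == 1].mean()
        return float(abs(rate_0 - rate_1))

    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])    # 1 = approved
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # protected attribute
    print(demographic_parity_diff(y_pred, group))  # 0.5: a large disparity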

JEL Classification: G21; G32; O33

Suggested Citation

Fritz-Morgenthal, Sebastian and Hein, Bernhard and Papenbrock, Jochen, Financial Risk Management and Explainable Trustworthy Responsible AI (June 25, 2021). Available at SSRN: https://ssrn.com/abstract=3873768 or http://dx.doi.org/10.2139/ssrn.3873768

Sebastian Fritz-Morgenthal

Bain & Company

Two Copley Place
Boston, MA 02118
United States

Bernhard Hein

Ernst & Young

710 Bausch and Lomb Pl
Rochester, NY 14604
United States

Jochen Papenbrock (Contact Author)

NVIDIA GmbH

Germany
+49-(0)1741435555 (Phone)

HOME PAGE: http://www.nvidia.com/en-us/industries/finance/
