
When AI-driven decisions are acceptable to clients and Risk & Compliance



Artificial Intelligence (AI) algorithms are commonly used in many sectors. Yet many financial services companies still apply AI only in a limited way, especially to decisions that could significantly impact business results. A main reason is uncertainty about whether internal and external stakeholders will accept its use. What is this uncertainty, and what practical guidance can help? We offer a client-centric framework and illustrate it with one of our projects. The framework is built from the perspective of the end client, but it also helps specifically to meet the demands of internal stakeholders.


Potential for more value with less work

Specialists such as underwriters, risk experts and insurance claim handlers spend a lot of time on repetitive assessments and decisions. When these steps in the business process are automated, the specialists can focus on more challenging activities and provide clients with more added value. Often rules-based decision models are built first, but these have their limitations. From experience we know that decision models augmented with AI can further increase the straight-through-processing ratio and make the process more accurate and consistent.
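To make this concrete, the sketch below shows what such augmentation can look like: hard business rules catch the unambiguous cases first, a model scores the remainder, and only uncertain cases are routed to a specialist. The field names, thresholds and triage logic are hypothetical, purely for illustration.

```python
# Minimal sketch of rules-first triage augmented with a model score.
# All field names and thresholds are illustrative, not from a real process.
def triage(claim: dict, model_score: float) -> str:
    # Hard business rules handle the unambiguous cases straight away.
    if not claim["policy_active"]:
        return "auto-decline"
    if claim["amount"] <= 500:
        return "auto-approve"
    # The model handles the middle ground; only uncertain scores
    # (here: between 0.2 and 0.8) are routed to a human specialist.
    if model_score >= 0.8:
        return "auto-approve"
    if model_score <= 0.2:
        return "auto-decline"
    return "refer-to-specialist"

print(triage({"amount": 2_000, "policy_active": True}, model_score=0.9))
# -> auto-approve: handled straight-through, no specialist time spent
```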


Two parties cause the uncertainty: clients and Risk & Compliance

The AI models that can contribute most to business results are also the ones that can most affect clients. An AI model that wrongly assesses creditworthiness or insurance risk can of course cause much more harm than a model that wrongly assesses online click behavior or a cross-sell opportunity. These high-stakes models therefore attract close attention from several internal and external stakeholders, such as regulators, risk officers, clients and client representatives.


Because of this, Risk & Compliance teams often place strict demands on model validation and monitoring. The more complex and impactful the model, the more difficult it is to meet those demands, and many organisations have little experience with this yet. On top of that, financial services providers are uncertain whether their clients and intermediaries will accept the outcomes of these strongly differentiating, high-impact models. These uncertainties in both front office and back office create resistance and slow down innovation.


Client acceptance as the leading perspective

The framework below provides guidance for getting AI accepted by both internal and external stakeholders. Because it is built from the client perspective, it appeals more and generates more energy than thinking only in terms of rules and obligations.

In short, it comes down to this: as a client you want to understand how an (automated) decision was reached; you want to feel it is a decision you can live with; and you want to be able to act on it. We elaborate on each point below.


1. The outcome must be explainable

You must be told which data was actually used and how that data led to the outcome. But to be truly considered explainable, the explanation must also be perceived as ‘logical’; in other words, it must align with our understanding of how the world works. The model must also be consistent, so that outcomes and explanations for similar cases do not differ much.
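As a minimal illustration of such an explanation: for a linear scoring model, the contribution of each data field to an individual decision can be computed exactly. The sketch below uses a hypothetical credit-scoring model with made-up feature names, fitted on simulated data; for non-linear models, attribution libraries such as SHAP generalise the same idea.

```python
# Minimal sketch: a per-feature explanation of one automated decision.
# The model and feature names are hypothetical, fitted on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                 # stand-in for historical cases
y = (X @ np.array([1.5, -1.0, 0.5]) + rng.normal(size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

features = ["income", "debt_ratio", "years_as_client"]
applicant = np.array([0.4, 1.2, -0.3])

# For a linear model, coefficient * (value - average) is an exact additive
# breakdown of the applicant's score relative to the baseline case.
contributions = model.coef_[0] * (applicant - X.mean(axis=0))
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>16}: {c:+.2f}")
print(f"approval probability: {model.predict_proba(applicant[None])[0, 1]:.2f}")
```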


2. The explanation must be acceptable

The data used must not be perceived as a violation of privacy, and the same holds for the insights derived from it (using a predicted pregnancy, or an upcoming transfer of company ownership deduced from a change in transaction patterns, is not appreciated by many clients). In addition, the model’s outcomes must be unbiased towards vulnerable groups and towards sensitive characteristics such as gender or ethnicity, whether of individuals or in the composition of a workforce.
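Such bias requirements can be tested. One common starting point is a group-fairness check: compare outcomes across a sensitive attribute that the model must not disadvantage. The sketch below uses simulated decisions and a made-up threshold; real validations typically add more refined metrics, such as equalised odds.

```python
# Minimal sketch of a demographic-parity check on model decisions.
# Decisions and group labels are simulated; in a real validation these
# would come from the model under review and the actual client portfolio.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=10_000)   # sensitive attribute
approved = rng.random(10_000) < np.where(group == "A", 0.62, 0.55)

rates = {g: approved[group == g].mean() for g in ("A", "B")}
parity_gap = abs(rates["A"] - rates["B"])

print(f"approval rate A: {rates['A']:.3f}")
print(f"approval rate B: {rates['B']:.3f}")
print(f"demographic parity gap: {parity_gap:.3f}")
# A gap above an agreed threshold (say 0.05) would trigger investigation
# before the model is allowed to make or support decisions.
```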


3. With a sense of control and benefit for the client

To give clients a sense of satisfaction, even in the case of a negative outcome, they must experience a sense of control. This requires the possibility of human intervention in the decision process, and an AI model whose outcomes can be influenced in predictable ways. The latter makes it possible to explain to clients what they themselves can do to obtain a better outcome. It is then essential that ‘good behavior’ does indeed lead to better scores, and to lower interest rates and premiums as a result.
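Whether a model is predictably influenceable can be demonstrated with a counterfactual: given a declined application, compute the smallest change in a feature the client actually controls that flips the outcome. The sketch below does this for the hypothetical linear model from the first example; for complex models, dedicated counterfactual-explanation methods serve the same purpose.

```python
# Minimal sketch: a counterfactual "what can the client do?" question,
# answered for a hypothetical linear model with illustrative coefficients.
import numpy as np

coef = np.array([1.5, -1.0, 0.5])    # income, debt_ratio, years_as_client
intercept = -0.2
applicant = np.array([0.4, 1.2, -0.3])

score = coef @ applicant + intercept  # score > 0 means approval
if score <= 0:
    # debt_ratio is something the client can influence; solve for the
    # change in that single feature that lifts the score to the threshold.
    i = 1                             # index of debt_ratio
    needed_change = -score / coef[i]
    print(f"declined (score {score:+.2f})")
    print(f"reduce debt_ratio by {abs(needed_change):.2f} for approval")
```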


When a decision model delivers on the three perceptions above, there is little reason for uncertainty about whether clients will accept the application of AI. Naturally, the service provider must still communicate all of this well enough that clients are fully informed and the perceptions are effectively conveyed.


Killing two birds with “one” stone

Underneath the three client perceptions lie quite a few requirements. However, the technology and methodology to meet them exist. The good news is that once you realize the three client perceptions, you can also cover almost anything Risk & Compliance may demand. Making AI models explainable and acceptable to clients, and keeping control over their outcomes, requires specific methods and a certain level of control and transparency; with those in place, you can meet the Risk & Compliance demands as well.


So the next time you consider applying AI to high-impact decisions, only three questions are relevant: “Can we explain this to the client?”, “Will they find it acceptable?” and “Will the client have a degree of control over the outcome?”.


This removes much of the uncertainty, and many puzzle pieces will fall into place. The client perspective also energizes most professionals more than the obligation to follow difficult rules does.

We are open to discussing all sorts of matters involving data science and AI, especially those that touch upon organisational aspects. Feel free to contact us for a cup of coffee!



Interested in what IG&H can do for you? Contact us!


