When AI-driven decisions are acceptable to clients and Risk & Compliance

By Data science, News

Artificial Intelligence (AI) algorithms are commonly used in many sectors. Yet many financial services companies still apply AI only to a limited extent, especially for decisions that could significantly impact business results. A main reason is uncertainty about whether internal and external stakeholders will accept its use. What is this uncertainty, and what practical guidance can help? We offer a client-centric framework for such guidance and illustrate it with one of our projects. The framework is composed from the perspective of the end customer, but it also helps specifically to satisfy internal stakeholders.

Potential for more value with less work
Specialists like underwriters, risk experts and insurance claim handlers spend a lot of time on repetitive assessments and decisions. When these points in the business process can be automated, the specialists can focus on more challenging activities and deliver more added value to clients. Often, rules-based decision models are composed first, but these have their limitations. From experience we know that decision models augmented with AI can further increase the straight-through-processing ratio and make the process more accurate and consistent.
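A rules-plus-model decision flow like this can be sketched in a few lines. Everything below is illustrative: the feature names, amount threshold and score cut-offs are invented, not taken from a real underwriting process.

```python
# Minimal sketch of a rules-first decision flow augmented with a model score.
# All field names and thresholds are hypothetical, for illustration only.

def triage_claim(claim: dict, model_score: float) -> str:
    """Route a claim: auto-approve, auto-reject, or send to a specialist."""
    # Hard business rules always take precedence over the model.
    if claim["amount"] > 50_000:
        return "specialist"          # high stakes: always a human decision
    if claim["policy_active"] is False:
        return "auto-reject"
    # The model handles the remaining, repetitive middle ground.
    if model_score >= 0.90:
        return "auto-approve"        # high confidence: straight-through
    if model_score <= 0.10:
        return "auto-reject"
    return "specialist"              # uncertain cases go to a human

decisions = [
    triage_claim({"amount": 1_200, "policy_active": True}, 0.95),
    triage_claim({"amount": 80_000, "policy_active": True}, 0.99),
    triage_claim({"amount": 3_000, "policy_active": True}, 0.50),
]
```

The model only widens the band of cases handled automatically; the rules keep the highest-impact decisions with the specialists.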

Two parties cause uncertainty: clients and Risk & Compliance
The AI models that can contribute most to business results are also the ones that can most affect clients. An AI model that wrongly assesses creditworthiness or insurance risk can of course cause much more harm than a model that wrongly assesses online click behavior or cross-sell opportunity. These high-stakes models therefore draw close attention from several internal and external stakeholders, such as regulators, risk officers, clients and client representatives.

Because of this, Risk & Compliance teams often place strict demands on model validation and monitoring. The more complex and impactful the model, the more difficult it is to meet those demands, and many organisations do not yet have much experience with this. In addition, financial services providers are uncertain whether their clients and intermediaries will accept the outcomes of these strongly differentiating, high-impact models. These uncertainties in both front office and back office create resistance and slow down innovation.

Client acceptance as the leading perspective
The framework below provides guidance for getting AI accepted by both internal and external stakeholders. It is composed from the client perspective, which makes it more appealing and energizing than thinking only in terms of rules and obligations.

In short, it comes down to this: as a client you want to understand how an (automated) decision was reached; you want to feel it is a decision you can live with; and you want to be able to act on it. We elaborate on these points below.

1.     The outcome must be explainable
You must be told what data was actually used and how that data led to the outcome. But to really be considered explainable, the explanation of the outcome must also be perceived as 'logical'. In other words, the explanation must align with our understanding of how the world works.
Of course, the model must also be consistent, so that outcomes and explanations for similar cases do not differ much.
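As a minimal illustration of an explainable outcome, a transparent (linear) score can always be decomposed into per-feature contributions that show exactly which data was used and how it moved the result. The feature names and weights below are invented for illustration, not taken from a real credit model.

```python
# Illustrative only: a transparent linear score whose outcome can be
# explained as per-feature contributions. Features and weights are made up.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.5, "payment_delays": -0.3}

def explain_score(features: dict) -> dict:
    """Return each feature's contribution to the total score."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

client = {"income": 1.2, "debt_ratio": 0.8, "payment_delays": 2.0}
contributions = explain_score(client)
total = sum(contributions.values())

# The explanation lists what data was used and how it moved the outcome:
for name, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"{name:15s} {c:+.2f}")
print(f"{'total':15s} {total:+.2f}")
```

For non-linear models the same kind of per-feature breakdown is harder and requires dedicated explanation techniques, which is exactly why the choice of model matters for this requirement.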

2.     The explanation must be acceptable
The data used must not be perceived as a violation of privacy, and the same goes for the insights obtained from it (many clients do not appreciate the use of a predicted pregnancy, or an upcoming transfer of company ownership deduced from a change in transaction patterns).
In addition, the model's outcomes need to be unbiased towards vulnerable groups and towards sensitive characteristics such as gender or ethnicity, whether of individuals or in the composition of a workforce.
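One simple way to monitor for such bias is to compare outcome rates across groups in the decision logs. This stdlib-only sketch uses made-up decisions and an illustrative 20% gap threshold; real fairness audits use more refined metrics.

```python
# A minimal bias check, assuming decisions are logged with a sensitive
# attribute available for auditing. The data below is invented.

decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rates(rows):
    """Approval rate per group."""
    totals, approved = {}, {}
    for r in rows:
        totals[r["group"]] = totals.get(r["group"], 0) + 1
        approved[r["group"]] = approved.get(r["group"], 0) + r["approved"]
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
# Flag the model for review when the demographic-parity gap is too large.
gap = max(rates.values()) - min(rates.values())
needs_review = gap > 0.2
```

A gap alone does not prove discrimination, but it is a cheap, continuous signal that tells you when a human review of the model is warranted.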

3.     With a sense of control and benefit for the client
To give clients a sense of satisfaction, even in the case of a negative outcome, they must experience a sense of control. This requires the possibility of human intervention in the decision process and an AI model whose behavior is predictably influenceable. The latter makes it possible to explain to clients what they can do themselves to obtain a better outcome. It is, of course, important that 'good behavior' subsequently does indeed lead to better scores and, as a result, lower rates.

When a decision model can realize the perceptions above, there need not be much uncertainty about whether clients will accept the application of AI. Naturally, the service provider must still communicate all of this well enough that clients are completely informed and the above perceptions are indeed conveyed.

Killing two birds with “one” stone
Underneath the three client perceptions lie quite a few requirements, but we have the technology and methodology to realize them. The good news is that once you realize the three client perceptions, you will be able to cover almost anything Risk & Compliance may demand. Making AI models explainable and acceptable to clients, and keeping control over the outcomes, requires specific methods and a certain level of control and transparency. With those in place, you can also meet the Risk & Compliance demands.

So the next time you contemplate a solution that applies AI to high-impact decisions, only three questions are relevant: "Can we explain this to the client?", "Will they find it acceptable?" and "Will the client have a certain level of control over the outcome?".

This will reduce much uncertainty and many puzzle pieces will fall into place. The client perspective energizes most professionals more than the necessity of following difficult rules.

We are open to discussing all sorts of matters that involve data science and AI, especially those that touch upon organisational aspects. Feel free to contact us for a cup of coffee!

Contact
Mando Rotman
E: mando.rotman@igh.com

Why ethical reasoning should be an essential capability for Data Science teams and two concrete actions to kickstart your team on ethical knowledge 

By Data science, News

Wherever new technology is introduced, ethics and legislation trail behind the applications. The field of data science cannot be called new anymore from a technical point of view, but it has not yet reached maturity in terms of ethics and legislation. As a result, the field is especially prone to making harmful ethical missteps. 

How do we prevent these missteps right now, while we wait for — or even better: work on — ethical and legislative maturity? 

I propose that the solution lies in taking responsibility as a data scientist yourself. Before I reach this conclusion, I will give you a brief introduction to data ethics and legislation. I will also share a best practice from my own team, with concrete actions to make your team ethics-ready. 

“But data and models are neutral in themselves, so why worry about good and bad?” 

If 2012 marked the kickoff of the golden age of data science applications, with data science crowned the ‘Sexiest job of the 21st century’, then 2018 might be the year of data ethics. It is the year the whole world started forming an opinion on how data may and may not be used. 

The Cambridge Analytica goal of influencing politics clearly fell in the ‘may not’ camp. 

This scandal opened up a major discussion about the ethics of data use. Multiple articles have since discussed situations where the bad of algorithms outweighed the good. The many examples include image recognition AI erroneously labeling humans as gorillas, the chatbot Tay, which became too offensive for Twitter within 24 hours, and male-preferring HR algorithms (which raises the question: is data science the sexiest, or the most sexist job of the 21st century?). 

Clearly, data applications have left neutral ground. 

In addition to — or maybe caused by — attention from the public, large organisations and institutions such as Google, the EU and the UN now also see the importance of data ethics. Many guidelines for data/AI/ML have been published, which can provide ethical guidance when working with data and analytics. 

It is not necessary to undertake the time-consuming endeavour of reading every single one of these. A meta-study of guidelines from 39 different authors shows strong overlap in the following topics: 

1) Privacy
2) Accountability
3) Safety and security
4) Transparency and explainability
5) Fairness and non-discrimination 

This is a good list of topics to start thinking and reading about. I highly encourage you to investigate these more deeply yourself, as this article cannot explain them as thoroughly as their importance deserves. 

Legal governance, are we there yet? 

The discussion on the ethics of data is an important step in the journey towards appropriate data regulation. Ideally, laws are based on shared values, which can be found by thinking and talking about data ethics. To write legislation without prior philosophical contemplation would be like blindly pressing some numbers at a vending machine, and hoping your favourite snack comes out. 

Some first pieces of legislation aimed at the ethics of data are already in place. Think of the GDPR, which regulates data privacy in the EU. Even though this regulation is not (yet) fully capable of strictly governing privacy, it does propel privacy — and data ethics as a whole — to the center of the debate. It is not the endpoint, but an important step in the right direction. 

At this moment, we find ourselves in an in-between situation in the embedding of modern data technology in society: 

  • Technically, we are capable of many potentially worthwhile applications. 
  • Ethically, we are reaching the point where we can mostly agree on what is and is not acceptable. 
  • However, legally, we are not in a place where we can suitably ensure that the harmful applications of data are prevented: most data-ethical scandals are solved in the public domain, and not yet in the legal domain. 

Responsibility currently (mostly) rests on the shoulders of Data Scientists 

So, the field of data cannot be ethically governed (yet) through legislation. I think that the most promising alternative is self-regulation by those with the most expertise in the field: data science teams themselves. 

You might argue that self-regulation brings up the problem of partiality; I propose it, however, as an in-between solution for the in-between situation we find ourselves in. As soon as legislation on data use matures, less (but never zero) self-regulation will be necessary. 

Another struggle is that many data scientists find themselves torn between acting ethically and creating the most accurate model. By taking ethical responsibility, data scientists also take on the responsibility to resolve this tension. 

I can be persuaded by the argument that the unethical alternative might be more expensive in terms of money (e.g. GDPR fines) or damage to company image. Your employer or client may be harder to convince. “How to persuade your stakeholders to use data ethically” sounds like a good topic for a future article. 

My proposal has an important consequence for data science teams: next to technical skills, they would also need knowledge of data ethics. This knowledge cannot be assumed to be present automatically: software firm Anaconda found that just 18% of data science students say they received education on data ethics in their studies. 

Moreover, a single person with ethical knowledge wouldn't be enough; every practitioner of data science must have basic skill in identifying the potential ethical threats of their work. Otherwise, the risk of ethical accidents remains substantial. But how do you reach team-wide ethical know-how? 

Two concrete actions towards ethical knowledge 

Within my own team, we take a two-step approach: 

1) a group-wide discussion on what each member finds ethically important when dealing with data and algorithms 

2) constructing a group-wide accepted ethical doctrine based on this discussion 

In the first step we educate the group on the current status of data ethics in both academia and business. This includes discussing data-ethics problems in the news, explaining the most prevalent ethical frameworks, and conversations about how ethical problems may arise in daily work. This should enable each individual member to form an opinion on data ethics. 

The team-wide ethical data guidelines constructed in the second step should give our data scientists a strong grounding in identifying potential threats. The guidelines shouldn't be constructed top-down; the individual input from the group-wide discussions forms a much better basis. This way, general guidelines that represent every data scientist can be constructed. 

The doctrine will not succeed if constructed as a detailed step-by-step list. Instead, it should serve as a general guideline that helps to identify which individual cases should be further discussed. 

Precisely that should be a task of the data scientist: ensuring that potentially unethical data usage does not go unnoticed, whether by data scientists or by any colleague who uses data in their work. This raises awareness of data ethics, which enables companies to responsibly leverage the power of data. 

In short: start talking about data ethics
We are technically capable of life-changing data applications; however, a safety net in the form of legislation is not yet in place. Data scientists walk a tightrope over a deep valley of harmful applications, where overall knowledge of ethics acts as the pole that helps them balance. By initiating the proper discussion, your data science team has the tools to prevent expensive ethical missteps. 

As I argue in the article, discussion on data ethics propels the field towards maturity, such that we can arrive at a “rigorous and complex ethical examination” of data science. So, engage in discussion: be critical about this content, form an opinion, talk about it, and change your opinion often as you encounter novel information. This not only makes you a better data scientist; it makes the whole field better. 

 Contact
Tom Jongen
E: Tom.jongen@igh.com

 

What Data Science Managers can learn from McDonald's

By Data science, Insurance, News

Insurers and intermediaries are digitizing more, and faster. This has implications for the supporting organizational functions, such as the Data Science and AI teams. From our recently conducted Data Science Maturity Quick Scan among Dutch insurers, we learn that nearly all companies currently organize their Data Science in the same way: centrally. However, the front runners are now about to transition to a different, hybrid organizational model. Determining the best organizational model for the Data Science function turns out not to be simple.

How do you organize Data Science and AI as a scalable corporate function when you can no longer keep it centrally organized, but also do not want to switch to a very decentralized hybrid model?

Three basic models
Actuarial expertise and business intelligence have been part of the insurance business for a long time, but about two decades ago people with the title Data Scientist started to appear. These professionals usually worked on non-risk use cases, for example in Sales, Marketing & Distribution, Fraud and Customer Service, and they wanted to apply the latest, often non-linear, Machine Learning (ML) techniques. The introduction of this kind of work often happened in a very decentralized, scattered way throughout organizations. But as the reputation of and expectations for their work grew, these new professionals were often grouped together in central Centers of Competence (CoC).

Fig 1. Three basic models for organizational functions

The CoC model brings some advantages for a new data science function compared to the decentralized model, especially when the company is not yet really functioning as a digital, data-driven organization. Five out of six organizations in our Maturity Quick Scan have organized their data scientists in a CoC model. However, at some companies the digital transformation is getting serious, and in that case strong centralization can result in a capacity bottleneck, or cause too big a gap between business and data science teams in terms of knowledge, priorities and communication.

Switching from a centrally organized model to a more hybrid model is often advised as a best-of-both-worlds solution. This should make it easier to scale up and to align the knowledge and application of data science closely with the day-to-day business, while a select number of activities and governance can remain central.

Concerns over hybrid models
Spotify developed a popular hybrid version that has become known as the Spotify model, with its Squads, Tribes, Chapters and Guilds¹. This type of model has become popular among the digital natives and e-commerce companies of this world. But many data science managers at Dutch insurance companies have concerns about this highly decentralized version of a hybrid model. These concerns are:

  • Data culture and the scale of AI applications may still be insufficient to maintain the needed continuous development and innovation
  • There are often still company-wide use cases to be developed that could be realized much more efficiently centrally
  • In addition, IG&H has identified a large gap in data science maturity between corporate insurance and consumer insurance². A more central organizational model can help to narrow that gap

This is why in practice we often opt for a more central version of the hybrid model. But every hybrid (re)organization raises questions such as:

  • How do you keep the complexity of funding and governance down?
  • How do you maintain sufficient control over standards, continuous development and innovation?
  • Which activities and responsibilities do you keep central and which not?

A different perspective: a successful retail business model
Thinking about these questions from a different perspective, or from the perspective of a different sector, can help to find answers. We do this a lot at IG&H to solve problems. In this article we draw an analogy with a good ol', trusted business model. In the late fifties of the 20th century, fast food giant McDonald's started to conquer the world at a rapid pace. The McDonald's concept (branding and methodology) was a proven success, but the explosive, profitable growth was partly made possible by the business model chosen for expansion: the franchise model.

A franchise is a type of business that is operated by an individual(s) known as a franchisee using the trademark, branding and business model of a franchisor. In this business model, there is a legal and commercial relationship between the owner of the company (the franchisor) and the individual (the franchisee). In other words, the franchisee is licensed to use the franchisor’s trade name and operating systems.

In exchange for the rights to use the franchisor's business model — to sell the product or service and be provided with training, support and operational instructions — the franchisee pays a franchise fee (known as a royalty) to the franchisor. The franchisee must also sign a contract (the franchise agreement) agreeing to operate in accordance with the terms specified in it.

A franchise essentially acts as an individual branch of the franchise company.

Fig 2. Illustration of the franchise model
Credits: https://franchisebusinessreview.com/post/franchise-business-model/

The franchise model offers a business concept some important benefits that help it expand quickly and sustainably:

  • Access to (local) capital
  • Entrepreneurship of local establishments
  • Presence close to local customers
  • Ability to facilitate centrally what locally cannot be realized in a cost-efficient way, or what is an essential part of the success formula (e.g. R&D, Production, Procurement and Marketing)

Each of these qualities can be related to lessons learned and best practices in scaling up Data Science within an organization. Just as the franchise principle made it possible for McDonald's to expand rapidly into the world, it can help expand Data Science throughout the company.

Applying franchise model principles
Many insurers desire a more adaptable and entrepreneurial company culture; ownership, client focus and flexibility are key words. The principle of local entrepreneurship in the franchise model fits this very well. It encourages both the business owners and the data scientists to work in a client-focused way, always with a business case to achieve.

Data Science is a broad discipline that evolves rapidly, while it is often not yet well embedded in most companies. This makes a central organizational element very valuable for ensuring quality standards, continuous development and innovation, just as the franchisor in the franchise model often facilitates certain means of production, R&D and best practices. One of the front runners in the Dutch insurance market in organizing data science is therefore transitioning to a franchise-inspired organization model for its data science function.

So, franchise-inspired thinking can be valuable for Data Science organizations that want to move beyond the CoC but do not (yet) want to move to a very decentralized hybrid model like the Spotify model. Which organizational model and specific choices fit best naturally depends on multiple variables: among others, company structure and culture, the digital maturity of the organization, and the maturity of the data science function itself. Because the value of data science and AI depends entirely on the quality of collaboration with the business and on broad application throughout the company, it is vital to objectively assess the current situation and to strike the right balance between centralized and decentralized, with a company-specific touch.

Contact
Mando Rotman
E: mando.rotman@igh.com
Manager Data Science IG&H

Jan-Pieter van der Helm
E: janpieter.vanderhelm@igh.com
Director Financial Services IG&H

1) https://agilescrumgroup.nl/spotify-model/
2) https://www.igh.com/news/page/2/
3) https://franchisebusinessreview.com/post/franchise-business-model/

 

Strategic moves to win post COVID-19 with Data Science 

By Data science, News

The COVID-19 pandemic has shaken up 2020 for many. This year marks a period where uncertainty is suddenly much more common than certainty, and it seems to be here to stay. In business, we see that although some thrive in uncertainty, many more require thorough recalibration to the many possible variants of the new normal. In the field of data science, we thrive in uncertainty, relying on statistics and AI as a golden compass to navigate through the fog. In that fog, we are dealing with changing, dynamic consumer demand, while physical customers are more distant than ever before. This holds for many industries and markets: supply chains will operate at a different pace, and capacity management in hospitals is a completely different ball game than before. Having a strong data science capability will be of value to adapt and thrive as a business. For some that means starting at absolute zero; for all it means that making the right moves is essential. 

This article gets you going with five key strategic moves to win the post-COVID-19 game. Furthermore, it lists several must-know techniques and data science terms that will kickstart you into having the right conversation with your lead data scientist.

1) Gather the troops and move into formation
To be able to quantify uncertainty, and to move away from it as much as possible, you will need a data science team. That team will be crucial in all the next steps: to leverage the right data, predict what will happen next, help in automation, and play a defining role in building high-quality customer relationships from a distance. Typically, a well-performing data science team combines complementary skillsets covering at least statistical modelling, AI/ML, data engineering, and project & people management. If you are short on people, an upcoming economic downturn may increase the availability of talent in the data science arena. 

Even if you already have a team in place, now is the time to review its positioning. Often, the impact of the team is optimal when it is positioned close to the board room, where strategic decision making takes place. From there, it can be deployed on high-impact projects where data science adds value beyond traditional analyst and BI roles. 

Last but not least, educate everyone in your company on data literacy, at different levels. Look inside the company for anyone who can be retrained now that internal and external demand is shifting. Leverage scale and what is already there: MOOCs deliver great value at a small investment. Build a culture founded on data and mathematical reasoning by demanding insight as a key ingredient (instead of just some seasoning) of decision making. The post-COVID-19 world will be different from the one we knew, so make sure your people act on data from now, instead of on experience from a no longer relevant pre-COVID-19 world. 

2) Plug in the data exhaust
Increasing reliance on data and an active data science team will make your company data-hungry. Although sensitive data should be safeguarded and the right policies should be implemented, you should resist the urge to apply a lockdown to parts of your data warehouse. Instead, let creativity thrive and democratize insensitive data.  

So what does this mean? It means that today's cross-department data should be available to your data science team and to the rest of the organization. Democratizing the data you already have will increase transparency and enhance collaboration across departments. To make that work technologically, move (part of) your data to the cloud and consider NoSQL solutions like Apache Cassandra or MongoDB to make your data available to many interfaces. Furthermore, actively search for and uncover 'dark' data that is available only to some, or never tapped into despite being extremely valuable. Last but not least, measure quality and optimize for speed in your data infrastructure, so that applications and data science models generate timely, high-impact insights. 

Additionally, we see that COVID-19 changes society as well as individual values and norms. To stay ahead, you will need to rethink what data to consume and record within the company, and experiment accordingly. Do not be evil, but rethink and tune your ethical approach to data collection, to constantly adapt your products and services to the 'new normal'. 

3) Look beyond the fog
More complete data will allow you to measure precisely how you are advancing towards success. But most of your forecasts and (in-production) AI/ML models will be confused by the current situation: they are all trained on pre-COVID-19 data, while post-COVID-19 data is hardly available (yet). Still, you need to look beyond the fog to steer clear of danger. What are some best practices? 

First of all, make sure that your models are properly implemented. The business and your data science team need to know as soon as possible when accuracy drifts. Suggest that your lead data scientist implement automated retraining of models using solutions such as Kubeflow, Azure ML or AWS SageMaker. Although some human intervention may still be required, this ensures that your models are updated regularly using the latest data. 

Second, implement and apply models that require less data or no data at all. With little data, use simple machine learning models to avoid inaccuracies caused by, for example, overfitting. Overfitted models fit a small training dataset very well, but also capture relations that are non-existent in the real world. Talk to your data science teams about Ridge regression, k-nearest neighbours (KNN) or Naïve Bayes to work with less data. 
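The effect of regularization on a small dataset can be sketched in a few lines of NumPy, using the closed-form solutions for ordinary least squares and ridge regression. The data below is synthetic; in practice you would compare models on validation error, not just coefficient size.

```python
# Hedged sketch: on a tiny, noisy dataset, an unregularized least-squares
# fit chases noise, while ridge regression damps the coefficients.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))                       # only 8 samples, 5 features
true_w = np.array([1.0, 0.0, 0.0, 0.0, 0.0])      # one real effect, rest noise
y = X @ true_w + rng.normal(scale=0.5, size=8)    # noisy target

# Ordinary least squares: w = (X'X)^-1 X'y
w_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge regression: w = (X'X + alpha*I)^-1 X'y  -- alpha*I shrinks the fit
alpha = 5.0
w_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(5), X.T @ y)

print("OLS   coefficient norm:", np.linalg.norm(w_ols))
print("Ridge coefficient norm:", np.linalg.norm(w_ridge))
```

The ridge solution always has a smaller (or equal) coefficient norm than the OLS solution, which is exactly the damping that makes it more robust when data is scarce.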

Third and last, consider generating scenario data that might replicate the future ahead. Together with the business, your data science team may be able to generate several future scenarios. If your data science capability is more advanced, support these scenarios with data generated using a generative deep learning technique such as GANs. Previously, GAN-driven synthetic data mainly seemed applicable in areas such as renewables¹, but the lack of data that COVID-19 causes will propel this novel application. Going forward, you can leverage machine learning to determine which scenario most resembles the current state of your company, and thus gives the most accurate predictions about the future. 
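A full GAN is well beyond a short sketch, but the underlying idea of scenario data can be illustrated with a much simpler parametric generator. The scenario names and parameters below are invented for illustration only.

```python
# Simple stand-in for generative scenario data (NOT a GAN): simulate weekly
# demand under a few named, hypothetical recovery scenarios.
import random

SCENARIOS = {
    "fast_recovery": {"base": 100, "growth": 0.03, "noise": 5},
    "slow_recovery": {"base": 100, "growth": 0.01, "noise": 5},
    "second_wave":   {"base": 100, "growth": -0.02, "noise": 10},
}

def generate_demand(scenario: str, weeks: int, seed: int = 0) -> list:
    """Simulate a weekly demand path under a named scenario."""
    p = SCENARIOS[scenario]
    rnd = random.Random(seed)                 # seeded for reproducibility
    level, series = p["base"], []
    for _ in range(weeks):
        level *= 1 + p["growth"]              # deterministic drift
        series.append(level + rnd.gauss(0, p["noise"]))  # plus noise
    return series

paths = {name: generate_demand(name, weeks=12) for name in SCENARIOS}
```

Models can then be trained or stress-tested against each generated path, and incoming real data can be matched to the scenario it most resembles.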

4)  Automate for efficiency
Although the most apparent post-COVID-19 challenge seems to be winning in uncertainty, winning in operational efficiency should not be overlooked in uncertain times. While markets are volatile, shrinking and growing in short periods, the winning players might be those that play the scaling game well. Here we can learn from previous economic downturns, where players that focused on technology- and data-science-driven automation won during and after the recession². 

Data-science-powered decision making helps cut costs by shifting FTEs away from repetitive tasks to areas where they can be more valuable. Furthermore, it allows you to scale operations faster, which may benefit sales & operations teams needing to outrun your competitors. Automation is a marriage between technology and data science, but beware of the complexity monster. Machine learning is not a silver bullet: most automation problems are 90% solved by simple data engineering or by evaluating decisions with simple business rules. If you are looking for tools that are strong in workflow automation, discuss implementing Airflow, or look into the new kid on the block: Prefect. 
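Tools like Airflow and Prefect model work as a dependency graph of tasks. This stdlib-only toy runner illustrates only that core idea, not either tool's actual API; the task names are invented.

```python
# Toy workflow runner: tasks declare their dependencies, and a topological
# sort guarantees each task runs only after everything it depends on.
from graphlib import TopologicalSorter

TASKS = {
    "extract": [],                 # no dependencies
    "clean":   ["extract"],
    "score":   ["clean"],
    "report":  ["score", "clean"],
}

results = []

def run(task: str) -> None:
    results.append(task)           # stand-in for the real work

for task in TopologicalSorter(TASKS).static_order():
    run(task)

print(results)                     # dependencies always run before dependents
```

Real orchestrators add scheduling, retries and monitoring on top of this same dependency-graph idea, which is why they earn their keep once workflows grow.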

Automating can be a safe bet if you focus on low-hanging fruit where the business case is strong. The complexity most often lies in openly identifying these cases. Furthermore, make sure that you educate your staff and that best practices are shared. Finally, make sure people gain from automating their own responsibilities. Only then will they not fear losing influence and joy in their work, which is a must for making automation work post-COVID-19. 

5) Build high quality distant customer services and relationships
It seems that, for society, keeping distance will remain a key value in a post-COVID-19 world. We see that consumers have been forced to fulfil their needs at a distance, which creates customer habits and expectations that are likely to stick³. These renewed expectations are very likely to propagate to other industries, and you probably want to prepare for that, even in a B2B setting. 

You can strengthen your digital relationship by interacting with your customers on a more personal level, powered by data science. Practically, this may mean predicting which services and products will match personal needs, optimizing availability for specific times and locations, or communicating in the language that appeals most to your customers. By plugging the exhaust (step 2) into customer data, your data science team will be able to segment customers using clustering techniques. Going forward, you can apply carefully set up A/B experiments to quickly learn about and optimize online services. 
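The clustering idea can be shown with a minimal one-dimensional k-means sketch. The spend figures are invented, and a real segmentation would use richer features and a library such as scikit-learn; this sketch also assumes no cluster ever empties.

```python
# Minimal 1-D k-means for customer segmentation (illustrative only).

def kmeans_1d(values, k=2, iters=10):
    """Cluster scalar values into k segments; deterministic min/max init."""
    centers = [min(values), max(values)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Recompute each center as its cluster mean (assumes none is empty).
        centers = [sum(c) / len(c) for c in clusters if c]
    return centers, clusters

monthly_spend = [12, 15, 14, 90, 110, 95, 13, 105]   # made-up customer data
centers, segments = kmeans_1d(monthly_spend)
# Two segments emerge: low spenders and high spenders.
```

Each segment can then receive its own tone, offers and service level, which is the personalization the paragraph above describes.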

Getting this to work means querying data science models in real-time, e-commerce-like conditions. Being fast is crucial, so you will need streaming analytics to process and evaluate data in motion, while events are happening⁴. Streaming analytics opens up applying data science models in environments where we were previously too slow, waiting for the data to arrive or for the model to generate an answer. Although a streaming analytics setup is today more complex than a traditional one, this is definitely an area to watch, and to apply where no other solution will work. 
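The essence of "data in motion" can be shown with a stdlib-only sketch: instead of waiting for a batch to complete, a running aggregate is updated and emitted after every incoming event. The event data is invented.

```python
# Streaming-style aggregation: emit an up-to-date running average per
# product immediately after each event, rather than after a nightly batch.

def running_average(events):
    totals, counts = {}, {}
    for product, amount in events:          # events arrive one by one
        totals[product] = totals.get(product, 0) + amount
        counts[product] = counts.get(product, 0) + 1
        # Yield a fresh answer while the stream is still in motion.
        yield product, totals[product] / counts[product]

stream = [("shoes", 80), ("shirts", 20), ("shoes", 120)]
snapshots = list(running_average(stream))
```

Production streaming platforms distribute this same update-per-event pattern across machines and add windowing and fault tolerance, but the mental model is the one above.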

What you should be doing next
In conclusion, evaluating how each of the five steps is relevant to your company and position can be of great help in navigating these challenging times. Each step should inspire you in some way and generate material for discussion within your company. We've written this article to generate questions, but especially to foster discussion. As a first step you might want to reconnect with your lead data scientist to explore next steps. We're happy to facilitate if you need help.

Furthermore, we look forward to hearing about your experiences in your pursuit of success using data science in a post-COVID-19 world. Good luck!

Contact
Tobias Platenburg
E: tobias.platenburg@igh.com

This article has also been published on Towards Data Science 


[1] C. Jiang et al., Day-ahead renewable scenario forecasts based on generative adversarial networks (2020), TechRxiv

[2] Walter Frick, How to Survive a Recession and Thrive Afterward (2019), Harvard Business Review May-June 2019 issue 

[3] Jasper van Rijn, The breakthrough of online shopping as the new standard (2020), IG&H blog series: How Retailers can rebound from the Corona crisis

[4] Databricks, Streaming Analytics (viewed 2020), Databricks Glossary 

Client case: AI-driven Commercial Credit Process

By Banking, Client cases, Data science, News

What they wanted
Our client wanted to improve their commercial credit process for real estate clients and transform it to be more risk-based, data-driven and efficient. This market-leading commercial bank needed to maintain their competitive edge and contribute to company-wide cost reductions. Another important objective was freeing up capacity of their Front Office and Risk team specialists, so that these experts could focus more on new business, on innovation and on the biggest risks. Their commercial credit process was highly manual; the credit risk reviews in particular required a lot of back-and-forth, precious time and a great deal of information.

What we did
We started by quickly building the value case and aligning the required stakeholders. Next, we introduced AI to largely automate the annual credit risk review cycle that was taking up thousands of hours each year. The client's credit specialists trained our AI algorithms to assess the need for risk reviews. We used the specialists' input and feedback to design the total solution to be transparent, interactive and customizable. During a pilot the proof of value was highly convincing, and enthusiasm among specialists and senior management grew further. We then started to use the AI model in practice, harvesting value and lessons learned, while preparing and realizing the full IT, process and organizational change.

What we achieved
Within a few months, the first AI model was running in production, automating more than 80% of all credit risk reviews. It consistently outperforms the experts in accuracy and helps redirect €500k of manual FTE annually.

AI not only accelerates the process and reduces costs; it also provides whole new capabilities. Using the AI model, our client can now continuously monitor risk at portfolio level and case level. The model can also be used for quick scans of scenarios, to spot which cases are likely to need attention first should the real estate market or an individual client's circumstances change. Building out new decision models for other parts of the process is in progress.

What they said
“Initially I was doubtful about the benefits of AI in real estate financing. The results have now completely convinced me”  – General manager Real Estate Finance –

Become a true Data Driven Organization

By Banking, Data science, News

In Commercial Banking it is increasingly important that business processes are digital, data-driven and able to leverage AI. In the current times of unexpected change we see this magnified. IG&H data scientists observe that organizations that have already transformed their processes now truly benefit.

Commercial banks are confronted with a sudden wave of SME client requests, changed risk drivers and changes in risk profiles. Banks want to help and need to figure out what (temporary) policy changes would be meaningful for clients, and what the impact of specific changes would be on the bank's business.

Those who have already transformed their processes are now able to handle this situation much faster and more confidently. Their business processes are already more efficient and more consistent. And in the current time of crisis they also prove to be much more Scalable, Transparent and Adaptable, and they offer more options for looking forward in a smart way.

Scalable
Digital, data-driven business processes with a high rate of straight-through processing, in which decisions are made (partly) by AI decision models, require much less human effort. They can therefore deal more easily with peaks in workload, especially in times when human capacity may be limited.

This benefit can only fully materialize when there are no bottlenecks in other parts with a crucial dependency. This stresses the fact that individual point solutions are not the way to go. The effective way is a transformation to become a true Data Driven Organization in People, Process, Data and Technology.

Transparent
Monitoring the impact of the current situation on the client experience, on process performance metrics and on KPIs is much more accurate and near real-time in a data driven process. This facilitates communication and coordination throughout the organization and allows management to take more effective actions.

For example: dashboards can quickly be shared to observe what is really happening, such as which teams have the highest workload increase, or where clients' payment behavior is most impacted. Analytics can be used to signal early-warning indicators such as trends and significant deviations.

Adaptable
AI decision models and business rules can easily be reconfigured to effectuate policy changes such as (temporarily) higher risk thresholds, lower weights for specific risk drivers, or higher or lower maximum values.

For example: it can be easier to change a few parameters in a risk review decision model than to communicate such changes to whole departments of specialists and coach them to execute the changes quickly and consistently.
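This "policy as configuration" idea can be made concrete with a small sketch. The thresholds, feature weights and decision rule below are invented purely for illustration; the point is that a policy change is one parameter, not a retraining or a department-wide instruction.

```python
# Sketch of the "adaptable" point: a policy change is a configuration change.
# The weights, threshold and decision rule here are invented for illustration.
from dataclasses import dataclass

@dataclass
class ReviewPolicy:
    risk_threshold: float = 0.6   # above this score, a manual review is triggered
    ltv_weight: float = 0.5       # weight of loan-to-value in the score
    arrears_weight: float = 0.5   # weight of payment arrears in the score

def needs_review(ltv: float, arrears: float, policy: ReviewPolicy) -> bool:
    score = policy.ltv_weight * ltv + policy.arrears_weight * arrears
    return score > policy.risk_threshold

case = {"ltv": 0.9, "arrears": 0.4}

normal = ReviewPolicy()
relaxed = ReviewPolicy(risk_threshold=0.8)   # temporary crisis measure

print(needs_review(**case, policy=normal))   # True: score 0.65 > 0.6
print(needs_review(**case, policy=relaxed))  # False: score 0.65 < 0.8
```

The same case flips from "review" to "no review" by changing a single threshold, which is exactly the kind of (temporary) policy adjustment the text describes.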

Smart forward looking
Finally, AI decision models can be used to 'test out' different scenarios and quickly evaluate the likely effects on individual loans and at portfolio level.

For example: changing the values of specific risk variables along the lines of different scenarios and observing the predicted effects can be used to zoom in on those clients who are likely to require attention first.
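A scenario quick scan of this kind can be sketched as: perturb a risk variable across the portfolio, re-score each loan, and rank clients by how much their score deteriorates. The scoring function, variables and portfolio below are hypothetical stand-ins, not a real model or client data.

```python
# Sketch of a scenario quick scan: shift a risk variable, re-score the
# portfolio, and rank clients by predicted deterioration. All hypothetical.

def risk_score(vacancy_rate: float, ltv: float) -> float:
    """Stand-in decision model scoring a real-estate loan (higher = riskier)."""
    return 0.6 * vacancy_rate + 0.4 * ltv

portfolio = {
    "client_a": {"vacancy_rate": 0.05, "ltv": 0.60},
    "client_b": {"vacancy_rate": 0.30, "ltv": 0.85},
    "client_c": {"vacancy_rate": 0.10, "ltv": 0.95},
}

def stressed(loan: dict) -> dict:
    """Scenario: market stress pushes vacancy rates up by 50%."""
    return {**loan, "vacancy_rate": loan["vacancy_rate"] * 1.5}

# Score deterioration per client under the stress scenario
deltas = {
    name: risk_score(**stressed(loan)) - risk_score(**loan)
    for name, loan in portfolio.items()
}

# Clients whose score deteriorates most likely need attention first
ranked = sorted(deltas, key=deltas.get, reverse=True)
print(ranked)
```

Because the whole scan is just re-evaluating a model under modified inputs, many scenarios can be compared within a very short turnaround time, which is what enables the smart forward looking described here.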

AI models can be a very powerful tool for providing insight into likely future outcomes. A data scientist and a business specialist who understand how the underlying machine learning works, and what data it was trained on, can provide a range of quick-scan insights within a very short turnaround time.

IG&H’s data scientists and banking consultants continue to work with clients (especially now) to transform commercial banking organizations to remain competitive and benefit from being a true Data Driven Organization.

Would you like to talk about what you can do while your processes are not yet as digital and data-driven as you would like? About how you can best take the first step? Or about how you can leverage your first progress and truly turn the corner to become a Data Driven Organization? We are ready to help you explore and make data work! Just drop me a note!

Mando Rotman
Manager Data Science IG&H
E: mando.rotman@igh.com

Acquiring your own Data Science competencies

By Case studies, Data science

What were the client's needs?

Over the past few years, our client, a leading logistics service provider with 30,000 customers, has centralised its BI function and taken major steps in Data Management. In view of the increasing speed of technological developments, the need to innovate, and the ambition to utilise its data assets, we were asked to assist the client in setting up a Data Science team. IG&H had already demonstrated the added value and potential of Data Science in previous projects.

What was our approach?​

Data Science is a team effort. A Data Science team spans a range of competencies and areas of expertise, and thus various roles such as Data Scientist and Data Engineer. The team interacts with other teams within the company, such as BI, IT, Marketing and Sales. Building up and integrating a new team within an existing organisation has a significant impact and faces many challenges.

IG&H gained its own experience in this regard while building up our Analytics practice. Supported by the best practices we developed during that process, we were able to assist the client in assembling their own Data Science team. In addition to structuring the team, we also focused heavily on collaboration with other teams and on the management of the new team.

What did we achieve?

Within three months, we laid the foundation for a new competency that adds value to the organisation right from the word go. This enables the client to lift its services to a higher level and, even more importantly, to remain competitive and relevant in a rapidly changing market.

Granular share of wallet data for all major product lines for truly data-driven sales management

By Data science, Insurance, Uncategorized

What they wanted
Make data-driven choices in the broker market: that is what leading Dutch omnichannel insurers want to be able to do. Key questions are: who are today's and tomorrow's leading brokers? Where do we stand in terms of both volume and NPS? How do we enhance our position to realize sustainable growth? To this end, they wanted to gather in-depth data on volume, movements, share of wallet, and NPS.