When AI-driven decisions are acceptable to clients and Risk & Compliance

By Data science, News

Artificial Intelligence (AI) algorithms are commonly used in many sectors. Yet many financial services companies still apply AI only to a limited extent, especially for decisions that could significantly impact business results. Uncertainty about whether internal and external stakeholders will accept its use is a main reason. What is this uncertainty, and what guidance can help? We offer a client-centric framework for such guidance and illustrate it with one of our projects. The framework is composed from the perspective of the end customer, but it also helps specifically to meet the demands of internal stakeholders.

Potential for more value with less work
Specialists like underwriters, risk experts and insurance claim handlers spend a lot of time on repetitive assessments and decisions. When these points in the business process are automated, the specialists can focus on more challenging activities and provide clients with much more added value. Often, rules-based decision models are composed first, but these have their limitations. From experience we know that decision models augmented with AI can further increase the straight-through-processing ratio and make the process more accurate and consistent.

Two parties cause uncertainty: clients and Risk & Compliance
The AI models that can contribute most to business results are also the ones that can most affect clients. An AI model that wrongly assesses creditworthiness or insurance risk can of course cause much more harm than a model that wrongly assesses online click behavior or cross-sell opportunity. These high-stakes models therefore attract close attention from several internal and external stakeholders, such as regulators, risk officers, clients and client representatives.

Because of this, Risk & Compliance teams often place strict demands on model validation and monitoring. The more complex and impactful the model, the more difficult it is to meet those demands, and many organisations do not yet have much experience with this. In addition, financial services providers are uncertain whether their clients and intermediaries will accept the outcomes of these strongly differentiating, high-impact models. These uncertainties in both front office and back office result in resistance and slow down innovation.

Client acceptance as the leading perspective
The framework below provides guidance for getting AI accepted by both internal and external stakeholders. It is composed from the client perspective, which makes it more appealing and energizing than thinking only in terms of rules and obligations.

In short it comes down to this: as a client you want to understand how an (automated) decision was reached; you want to feel it is a decision you can live with; and you want to be able to work with it. We elaborate on these points below.

1.     The outcome must be explainable
You must be told what data was actually used and how that data led to the outcome. But to be truly considered explainable, the explanation of the outcome must also be perceived as ‘logical’. In other words, the explanation must align with our understanding of how the world works.
Of course, the model must also be consistent, so that outcomes and explanations for similar cases do not differ much.
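What such an explanation can look like in practice is sketched below with the open-source SHAP library on a hypothetical credit-acceptance model; the features, data and model choice are illustrative assumptions, not a description of any specific production system.

```python
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical, made-up training data for a credit-acceptance model
X = pd.DataFrame({
    "income":         [32000, 54000, 41000, 78000, 23000, 61000],
    "debt_ratio":     [0.45, 0.20, 0.35, 0.10, 0.60, 0.15],
    "years_employed": [1, 8, 4, 12, 0, 6],
})
y = [0, 1, 1, 1, 0, 1]  # 1 = accepted, 0 = rejected

model = GradientBoostingClassifier().fit(X, y)

# SHAP attributes each individual decision to the features that drove it
explainer = shap.TreeExplainer(model)
client = X.iloc[[0]]                        # one specific application
contributions = explainer.shap_values(client)[0]

# A per-feature breakdown that can be translated into a client-friendly explanation
print(dict(zip(X.columns, contributions)))
```

Such a per-feature breakdown is the raw material for the ‘logical’ explanation described above; the translation into plain language for the client still has to be designed.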

2.     The explanation must be acceptable
The data used must not be perceived as a violation of privacy, and the same goes for the insights obtained from it (using a predicted possible pregnancy, or an upcoming transfer of company ownership deduced from a change in transaction patterns, is not appreciated by many clients).
In addition, the model’s outcomes need to be unbiased towards vulnerable groups and towards sensitive characteristics such as gender or ethnicity, whether of individuals or in the composition of a workforce.
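One simple way to monitor for such bias, sketched here under the assumption that model decisions can be joined to a sensitive attribute for monitoring purposes only, is to compare approval rates per group (a demographic-parity check); the data and column names are made up.

```python
import pandas as pd

# Made-up model outcomes with a sensitive attribute attached purely for monitoring
results = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 1],
    "gender":   ["f", "f", "f", "m", "m", "m", "m", "f"],
})

# Demographic-parity check: approval rate per group and the gap between groups
rates = results.groupby("gender")["approved"].mean()
print(rates)
print("max gap between groups:", rates.max() - rates.min())
```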

3.     With a sense of control and benefit for the client
To provide clients with a sense of satisfaction, even in the case of a negative outcome, they must experience a sense of control. This requires the possibility of human intervention in the decision process and an AI model that is predictably influenceable. This last requirement makes it possible to explain to the client what she can do herself to obtain a better outcome. Of course, it is important that ‘good behavior’ subsequently does indeed lead to better scores and, as a result, lower interest rates and premiums.

When a decision model can realize the perceptions above, there does not need to be much uncertainty about whether clients will accept the application of AI. Naturally, the service provider must still be able to communicate all of this well enough, so that clients are fully informed and the above perceptions are indeed effectively conveyed.

Killing two birds with “one” stone
Underneath the three client perceptions lie quite a few requirements. However, we do have the technology and methodology to realize them. The good news is that once you realize the three client perceptions, you will be able to cover almost anything that Risk & Compliance may demand. To make AI models explainable and acceptable to clients and to have control over the outcomes, you must apply specific methods and reach a certain level of control and transparency. With that in place, you can also meet the Risk & Compliance demands.

So the next time you contemplate a solution that applies AI to high-impact decisions, only three questions are relevant: “Can we explain this to the client?”, “Will he/she find it acceptable?” and “Will there be a certain level of control for the client over the outcome?”.

This will reduce much uncertainty and many puzzle pieces will fall into place. The client perspective energizes most professionals more than the necessity of following difficult rules.

We are open to discussing all sorts of matters that involve data science and AI, especially those that touch upon organisational aspects. Feel free to contact us for a cup of coffee!

Contact
Mando Rotman
E: mando.rotman@igh.com

Low-code in healthcare: 5+1 real life examples

By Health, News

If there is one benefit of the Covid-19 crisis, it is the growth of digital remote care. Resuming regular care in a 1.5-meter setting is simply not possible without digital applications. However, new solutions are needed quickly. These are preferably also affordable, easily adaptable and scalable without any problems. This is at odds with how we have known IT development in healthcare up to now and therefore a different approach is needed. Low-code platforms can provide a solution. They are known for being fast, cheap and flexible. This article uses five plus one examples to illustrate how low-code can make healthcare more digital. 

The advantages of remote care such as less travel, less waiting and less risk of infections have often been highlighted in recent years. Nevertheless, development has always lagged far behind expectations. This has now changed due to the Covid-19 pandemic. The use of digital applications in communication, monitoring and treatment increased rapidly, as did the demand for new applications. More and more patients and healthcare providers are opting for “at home when possible and at the healthcare provider if necessary“. 

Now more than ever, healthcare does not benefit from overly complex and costly IT processes that deliver a cumbersome solution only after a long time. On the contrary, applications with high ease of use are needed within weeks, so patients and caregivers can use them quickly and care delivery can continue and improve. If care provision changes, rapid and controlled adaptation of care is a must, if only to prevent us from reverting to old behaviour. 

It is striking that, in contrast to other sectors, little is developed with low-code in healthcare, even though low-code is intuitive, iterative and flexible and lends itself to (patient) portals, apps or even complex back offices. Developers do not need to master a programming language; they only need to know a tool in which they set configurations in a graphical user environment. Low-code is therefore fast and adaptive: developers can test the (new) needs of healthcare providers and/or patients directly during development. Another advantage is that it easily integrates with existing IT systems and standards (such as HL7), so new functionalities are added to the existing systems without disrupting current operations. Leading research firm Gartner expects that by 2024, 65% of all applications will be co-developed or managed with low-code. Well-known players are OutSystems, Mendix and Betty Blocks, which already have various applications in healthcare, especially internationally. 

National Coordination Center for Patient Distribution (The Netherlands)
Shortly after the seriousness of the Covid-19 crisis in the Netherlands became clear, the National Coordination Center for Patient Distribution (LCPS) was established. The aim of LCPS is to spread the patient care workload as effectively as possible throughout the Netherlands. To perform this assignment properly, insight is required into the most up-to-date information about available beds and transport capacity. In less than two weeks, an application, the coordination platform, was developed and made operational with low-code to provide this insight across all hospitals in the Netherlands and some in Germany. The coordination platform is used to process the transport movements of patients on request by matching supply and demand. Part of this is finding the best hospital and suitable transport for each patient based on 90+ different input variables. In addition, the platform provides reports that feature in the news nationwide.  

Kermit (United States)
The American company Kermit developed a low-code analysis platform for medical implants such as pacemakers and insulin pumps within nine months. The application manages contracts and invoices and monitors supplier compliance. The entire process is transparent: from unpacking the material during treatment to sending the invoice and paying the supplier. The data-driven platform maps trends to optimize processes, provides buyers with information about fraud and prices, and provides specialists with information for treatment choices. The Kermit platform is now running in 23 hospitals, saving on average 30% of their costs for medical implants. 

Saga Healthcare (United Kingdom)
Years ago, the English Saga entered the homecare market in its own country. The big difference with other healthcare providers was that Saga focused on an agile technology platform. The IT team of Saga was able to deliver SACHA, a homecare planning system, within six months. The application automates a huge amount of manual tasks, so that caregivers can use this time for the personal care of clients. Building with low-code was of added value for Saga mainly because the expertise was immediately embedded within its own IT department. As a result, it kept control in its own hands without having to commit to third parties. 

Medtronic (United States)
Medtronic has been one of the market leaders in medical devices such as heart implants for years. These implants constantly collect data from patients all over the world. It is very complex for healthcare providers to extract timely and actionable insights for the care and well-being of patients from these enormous amounts of data. Therefore, Medtronic built FocusOn in six months based on low-code, which filters 80% of the data for healthcare professionals. In addition to enabling healthcare professionals to deliver faster and better remote triages, the application of the low-code platform has also resulted in 50% IT budget savings. The platform makes it quite simple for new clinics to join: within 15 minutes, new customers and end users are ready to go. Since its launch in 2018, more than 335,000 triages have been performed through FocusOn, saving an estimated 27 years of clinical staff time.  

Kuwait Maternity Hospital (Kuwait)
Kuwait Maternity Hospital is one of the largest hospitals in Kuwait. The biggest problem for the hospital was the lack of insight into patient and capacity information due to the paper administration. Within twelve weeks, an external party put a Hospital Management System (HMS) live on low-code. This system offers the user a uniform patient view and provides real-time information for care managers: from the number of occupied beds and appointments to the number of operations and emergencies per day. Within a few weeks of implementation, the total registration time per patient decreased from 45 to 15 minutes. The number of errors in the patient file has also been reduced by 60 percent and communication between hospital departments has improved significantly. Due to its success, five other hospitals are now also using the system. 

National Health Service (United Kingdom)
The National Health Service (NHS) is known as the United Kingdom’s public health system. Especially for doctors with mental health problems, there is a Practitioner Health Program (PHP) within the NHS with free confidential care. The idea behind this is that doctors can return to work faster and more vital after treatment. The NHS started the program for doctors in the London area, but wanted to expand across the country in 2016. To also be able to offer the same confidential service nationwide, PHP has built a mobile app and a fully automated GP care system in seven weeks in low-code. With the app, healthcare providers can find therapists in their area and make an appointment anonymously. The app has now been used by more than 2000 doctors. 

Conclusion
The development of remote and connected care is complicated enough for healthcare providers. Who provides which care and when, who bears what responsibility for the quality of care and who pays for which care? Technology should therefore not be the problem. The development of low-code applications may be easier and faster, but it does not happen automatically. That is why we end this article with five tips for being part of the low-code revolution: 

1) Start small and finish big: start with the (agile) development of a working prototype in a pilot and discover the value of low-code development (proof of value);
2) By the patient, not for the patient: design continuously from the patient’s point of view and experiment with the flexibility of low-code development;
3) From do-it-together to do-it-yourself: get advice on the right platform, acquire the right low-code competencies and experience, and then build them yourself;
4) Complexity is failed simplicity: work under architecture and don’t allow IT to add unnecessary complexity;
5) You go faster alone, you go further together: never develop alone, but learn from each other by working together. 

Contact
Walter Kien
E: walter.kien@igh.com

This article has also been published on: ICT&health

 

IG&H recognised as Partner of the Year EMEA by OutSystems

By Announcement, IGH, News

Exactly 3 years ago, IG&H decided that low-code technology could help our clients make a sustainable impact in their sectors. After we joined forces with highly experienced teams of platform technology experts in Portugal and the Netherlands, a lot has happened, and thanks to smart collaboration it turned out to be a true winning combination. Today IG&H is recognised as Partner of the Year EMEA by OutSystems in two separate categories: Rainmaker (most Annual Recurring Revenue, ARR) and Pioneer (most new logos).

On top of that, it is even better to see that the smart collaboration with our client Medtronic EMEA has been recognised with an Innovation Award!

A big thank you to everyone who contributed to the pillars of this success.

Mortgage Update | Insights from Q2 2020 | Highest number of mortgages since 2008

By Banking, Mortgage Update, News

Mortgage revenue grows to new record of €38 billion, despite COVID-19 pandemic

Download the IG&H Mortgage Update (in Dutch) 

Utrecht, August 6 2020 – Mortgage revenue during the second quarter of 2020 grows by 30% compared to the second quarter in 2019. The number of mortgages grows by 26% to 114 thousand. This is the highest number of mortgages since 2008. The numbers rose among all groups, but people taking out refinancing and additional loans showed the strongest growth. Their numbers rose by 64% compared to the second quarter in 2019. For the first time, the number of mortgages for people refinancing and taking out additional loans exceeds the number of mortgages for new homeowners and transferors.

“The COVID-19 pandemic has yet to negatively affect the Dutch mortgage market. We still see an increase in the number of mortgages for new homeowners and existing homeowners who transfer to a new home. The large increase in the number of mortgages for people refinancing and taking out additional loans shows that the pandemic even seems to have a temporary positive effect on the market,” according to Joppe Smit of management consulting firm IG&H.

The average mortgage value continues to grow for new homeowners and transferors (+0.9%). For people taking out refinancing and additional loans, the average mortgage value decreases by 2.8% compared to the previous quarter. Collectively, this explains the decrease of the average mortgage value by 1.3% to €333,000.

People refinancing and taking out additional loans cause the strongest growth in 5 years
Mortgage revenue of people refinancing and taking out additional loans grows by 64% in the second quarter of 2020 compared to the same quarter in 2019. Their mortgage revenue of €15 billion accounts for 40% of the total market revenue. The number of mortgages also grows by 64%. This is the strongest growth in 5 years for both revenue and numbers. “It seems that many people are taking the time to refinance, possibly in anticipation of an increase in interest rates, or to take out additional loans for renovation during this pandemic. That clearly has a positive effect on the Dutch mortgage market,” according to Smit.

Market share of top 3 banks drops to a historical low
The market share of the top 3 banks has decreased to 45.6% this quarter. This decrease of 2.3 percentage points compared to the previous quarter brings their market share to the lowest level since the start of our measurements in 2006. Banks experience the strongest decrease of their market share among people refinancing and taking out additional loans (-5.8 percentage points). ING experiences the strongest decrease for two consecutive quarters.

Over 4,950 advisors have passed the course for advisors in sustainable housing
Since last quarter, IG&H reports on the progress of industry collective Duurzaam Wonen. They are getting closer to achieving their aim to educate at least 80% of all mortgage advisors in sustainability by the end of 2020. To date, 5,980 advisors have applied, an increase of 21% compared to last quarter. This implies that 60% of all mortgage advisors have now applied.

We hope you enjoy reading this update and invite you to respond!

Joppe Smit,
Director at IG&H
E: joppe.smit@igh.com
T: 06 2035 2438

Author & data-analysis IG&H mortgage update
Annelies van Putten-Stemfoort (annelies.vanputten@igh.com)

Subscribe to IG&H’s Mortgage Update

Why ethical reasoning should be an essential capability for Data Science teams and two concrete actions to kickstart your team on ethical knowledge 

By Data science, News

Wherever new technology is introduced, ethics and legislation will trail behind the applications. The field of data science can no longer be called new from a technical point of view, but it has not yet reached maturity in terms of ethics and legislation. As a result, the field is especially prone to making harmful ethical missteps. 

How do we prevent these missteps right now, while we wait for — or even better: work on — ethical and legislative maturity? 

I propose that the solution lies in taking responsibility as a data scientist yourself. I will give you a brief introduction on data ethics and legislation, before I reach this conclusion. Also, I will share a best-practice from my own team, which gives concrete actions to make your team ethics-ready. 

“But data and models are neutral in itself, why worry about good and bad?” 

If 2012 marked the kickoff of the golden age of data science applications, with data science crowned the ‘sexiest job of the 21st century’, then 2018 might be the year of data ethics. It is the year in which the whole world started forming an opinion on how data may and may not be used. 

The Cambridge Analytica goal of influencing politics clearly fell in the ‘may not’ camp. 

This scandal opened up major discussion about the ethics of data use. Multiple articles have since then discussed situations where the bad of algorithms outweighed the good. The many examples include image recognition AI erroneously denoting humans as gorillas, the chatbot Tay which became too offensive for Twitter within 24 hours and male-preferring HR algorithms (which raises the question: is data science the sexiest, or the most sexist job of the 21st century?). 

Clearly, data applications have left neutral ground. 

In addition to, or maybe caused by, attention from the public, large (governmental) organisations such as Google, the EU and the UN now also see the importance of data ethics. Many ‘guidelines for data/AI/ML’ have been published, which can provide ethical guidance when working with data and analytics. 

It is not necessary to enter the time-consuming endeavour of reading every single one of these. A meta-study of 39 different guideline authors shows a strong overlap in the following topics: 

1) Privacy
2) Accountability
3) Safety and security
4) Transparency and explainability
5) Fairness and non-discrimination 

This is a good list of topics to start thinking and reading about. I highly encourage you to investigate these more deeply yourselves, as this article will not explain them as deeply as their importance deserves. 

Legal governance, are we there yet? 

The discussion on the ethics of data is an important step in the journey towards appropriate data regulation. Ideally, laws are based on shared values, which can be found by thinking and talking about data ethics. To write legislation without prior philosophical contemplation would be like blindly pressing some numbers at a vending machine, and hoping your favourite snack comes out. 

Some first pieces of legislation aimed at the ethics of data are already in place. Think of the GDPR, which regulates data privacy in the EU. Even though this regulation is not (yet) fully capable of strictly governing privacy, it does propel privacy — and data ethics as a whole — to the center of the debate. It is not the endpoint, but an important step in the right direction. 

At this moment, we find ourselves in an in-between situation in the embedding of modern data technology in society: 

  • Technically, we are capable of many potentially worthwhile applications. 
  • Ethically, we are reaching the point we can mostly agree what is and what is not acceptable. 
  • However, legally, we are not in a place where we can suitably ensure that the harmful applications of data are prevented: most data-ethical scandals are solved in the public domain, and not yet in the legal domain. 

Responsibility currently (mostly) rests on the shoulders of Data Scientists 

So, the field of data cannot be ethically governed (yet) through legislation. I think that the most promising alternative is self-regulation by those with the most expertise in the field: data science teams themselves. 

You might argue that self-regulation brings up the problem of partiality; however, I propose it as an in-between solution for the in-between situation we find ourselves in. As soon as legislation on data use is more mature, less (but never zero) self-regulation is necessary. 

Another struggle is that many data scientists find themselves in a split between acting ethically and creating the most accurate model. By taking ethical responsibility, data scientists also receive the responsibility to resolve this tension. 

I can be persuaded by the argument that the unethical alternative might be more expensive in terms of money (e.g. GDPR fines) or damage to company image. Your employer or client may be harder to convince. “How to persuade your stakeholders to use data ethically” sounds like a good topic for a future article. 

My proposal has an important consequence for data science teams: next to technical skills, they would also need knowledge on data ethics. This knowledge cannot be assumed to be present automatically, as software firm Anaconda found that just 18% of data science students say they received education on data ethics in their studies. 

Moreover, just a single person with ethical knowledge wouldn’t be enough: every practitioner of data science must have basic skill in identifying the potential ethical threats of their work. Otherwise, the risk of ethical accidents remains substantial. But how do you reach overall ethical know-how in your team? 

Two concrete actions towards ethical knowledge 

Within my own team, we take a two-step approach: 

1) A group-wide discussion on what each member finds ethically important when dealing with data and algorithms 

2) Constructing a group-wide accepted ethical doctrine based on this discussion 

In the first step we educate the group on the current state of data ethics in both academia and business. This includes discussing data ethics problems in the news, explaining the most prevalent ethical frameworks, and talking about how ethical problems may arise in daily work. This should enable each individual member to form an opinion on data ethics. 

The team-wide ethical data guidelines constructed in the second step should give our data scientists a strong grounding in identifying potential threats. The guidelines shouldn’t be constructed top down; the individual input that comes out of the group-wide discussions forms a much better basis. This way, general guidelines that represent every data scientist can be constructed. 

The doctrine will not succeed if constructed as a detailed step-by-step list. Instead, it should serve as a general guideline that helps to identify which individual cases should be further discussed. 

Precisely that should be a task of the data scientist: ensure that potentially unethical data usage will not go unnoticed. Unethical usage not only by data scientists, but by all colleagues who may use data in their work. This way, awareness for data ethics is raised, which enables companies to responsibly leverage the power of data. 

In short: start talking about data ethics
We are technically capable of life-changing data applications; however, a safety net in the form of legislation is not yet in place. Data scientists walk a tightrope over a deep valley of harmful applications, where overall knowledge of ethics acts as the pole that helps them balance. By initiating the proper discussion, your data science team has the tools to prevent expensive ethical missteps. 

As I argue in the article, discussion on data ethics propels the field towards maturity, such that we can arrive at a “rigorous and complex ethical examination” of data science. So, engage in discussion: be critical about this content, form an opinion, talk about it, and change your opinion often as you encounter novel information. This not only makes you a better data scientist; it makes the whole field better. 

 Contact
Tom Jongen
E: Tom.jongen@igh.com

 

IG&H’s contribution to the National Coordination Center for Patient Evacuation (LCPS)

By Health, News

Due to the national increase in patients with COVID-19, the workload of patient care across the Netherlands needed to be spread as effectively as possible. Not only for patients with COVID-19, but for all patients. The aim of the National Coordination Center for Patient Evacuation (LCPS) is to spread the workload and care capacities across hospitals.

LCPS is being led by Bas Leerink and Bart ter Horst. The Dutch Army offers advice and support in the design, organization and operation. They are being strengthened by experts in the field of acute care, logistics, ICT, statistics and crisis management.

Just before the peak in the number of COVID-19-patients, IG&H, together with Erasmus MC, the Ministry of Defence and other partners, coordinated the setup of LCPS with the aim to spread the workload and care capacities across hospitals.

Journalist Mark de Bruijn has recorded the setup of the LCPS and reconstructed it into an exciting documentary.

Contact
E: info@igh.com

 

What Data Science Managers can learn from McDonalds

By Data science, Insurance, News

Insurers and intermediaries are digitizing their companies further and faster. This has implications for the organizational functions that support this, such as the Data Science and AI teams. From our recently conducted Data Science Maturity Quick Scan among Dutch insurers, we learn that nearly all companies currently organize their Data Science in the same way: centrally. However, the front runners are now about to transition to a different, hybrid organizational model. Determining the best organizational model for the Data Science function turns out not to be simple.

How do you organize Data Science and AI as a scalable corporate function, when you can no longer keep it centrally organized and also do not want to switch to a very decentralized, hybrid model?

Three basic models
Actuarial expertise and business intelligence have been part of the insurance business for a long time. But about two decades ago, people with the title Data Scientist started to appear. These professionals usually worked on non-risk use cases, such as in Sales, Marketing & Distribution, Fraud and Customer Service, and they wanted to apply the latest, often non-linear, Machine Learning (ML) techniques. The introduction of this kind of work often happened in a very decentralized and scattered way throughout organizations. But as the reputation and expectations of their work grew, these new professionals were often grouped together in central Centers of Competence (CoC).

Fig 1. Three basic models for organizational functions

The CoC model brings some advantages for a new data science function compared to the decentralized model, especially when the company is not yet really functioning like a digital, data-driven organization. Five out of six organizations in our Maturity Quick Scan have organized their data scientists in a CoC model. However, at some companies the digital transformation is getting serious and, in that case, strong centralization can result in a capacity bottleneck, or cause too big a gap between business and data science teams in terms of knowledge, priorities and communication.

Switching from a centrally organized model to a more hybrid model is often advised as a best-of-both-worlds solution. This should make it easier to scale up and align knowledge and application of data science closely with the day-to-day business, while a select number of activities and governance can remain central.

Concerns over hybrid models
Spotify developed a popular hybrid version that has become known as the Spotify model, with its Squads, Tribes, Chapters and Guilds [1]. This type of model has become popular among the digital natives and e-commerce companies of this world. But many data science managers at Dutch insurance companies have concerns about this highly decentralized version of a hybrid model. These concerns are:

  • Data culture and scale of the AI applications may be insufficient still to maintain the needed continuous development and innovation
  • There are often still company-wide use cases yet to be developed that could be realized much more efficiently centrally
  • Next to that, IG&H has identified a large gap in data science maturity between corporate insurance and consumer insurance [2]. A more centralized organizational model can help to narrow that gap

This is why, in practice, we often choose a more centralized version of the hybrid model. But every hybrid (re)organization raises questions such as:

  • How do you keep the complexity of funding and governance down?
  • How do you maintain sufficient control over standards, continuous development and innovation?
  • Which activities and responsibilities do you keep central and which not?

A different perspective; a successful retail business model
Thinking about these questions from a different perspective, or from the perspective of a different sector, can help to find answers. We do this a lot at IG&H to solve problems. In this article we make an analogy with a good old, trusted business model. In the late 1950s, fast food giant McDonalds started to conquer the world at a rapid pace. The McDonalds concept (branding and methodology) was a proven success, but the explosive, profitable growth was partly made possible by the business model chosen for expansion: the franchise model.

A franchise is a type of business that is operated by an individual(s) known as a franchisee using the trademark, branding and business model of a franchisor. In this business model, there is a legal and commercial relationship between the owner of the company (the franchisor) and the individual (the franchisee). In other words, the franchisee is licensed to use the franchisor’s trade name and operating systems.

In exchange for the rights to use the franchisor’s business model — to sell the product or service and be provided with training, support and operational instructions — the franchisee pays a franchisee fee (known as a royalty) to the franchisor. The franchisee must also sign a contract (franchise agreement) agreeing to operate in accordance with the terms specified in the contract.

A franchise essentially acts as an individual branch of the franchise company.

Fig 2. Illustration of the franchise model
Credits: https://franchisebusinessreview.com/post/franchise-business-model/

The franchise model offers some important benefits to a business concept that help it expand quickly and durably:

  • Access to (local) capital
  • Entrepreneurship of local establishments
  • Presence close to local customers
  • Ability to facilitate centrally what locally cannot be realized in a cost-efficient way, or what is an essential part of the success formula (e.g. R&D, Production, Procurement and Marketing)

Each of these qualities can be related to lessons learned and best practices of scaling up Data Science within an organization. As the franchise principle made it possible for McDonalds to rapidly expand into the world, it can also help expand Data Science throughout the company.

Applying franchise model principles
Many insurers desire a more adaptable and entrepreneurial company culture. Ownership, client focus and flexibility are key words. The principle of local entrepreneurship in the franchise model fits very well with this. It encourages both the business owner and the data scientist to work in a client-focused way and always with a business case to achieve.

Data Science is a broad discipline that evolves rapidly, while it is often not yet well embedded in most companies. This makes a central organizational element very valuable for ensuring quality standards, continuous development and innovation, just like the franchisor in the franchise model often facilitates certain means of production, R&D and best practices. One of the front runners in the Dutch insurance market with respect to organizing data science is therefore transitioning to a franchise-inspired organizational model for its data science function.

So, franchise-inspired thinking can be valuable for Data Science organizations that want to move beyond the CoC but do not (yet) want to move to a very decentralized hybrid model like the Spotify model. However, which organizational model and specific choices fit best naturally depends on multiple variables, among others: company structure and culture, the digital maturity of the organization and the maturity of the data science function itself. Because the value of data science and AI is completely dependent on the quality of collaboration with the business and on broad application throughout the company, it is vital to objectively assess the current situation and to strike the right balance between centralized and decentralized, with a company-specific touch.

Contact
Mando Rotman
E: mando.rotman@igh.com
Manager Data Science IG&H

Jan-Pieter van der Helm
E: janpieter.vanderhelm@igh.com
Director Financial Services IG&H

1) https://agilescrumgroup.nl/spotify-model/
2) https://www.igh.com/news/page/2/
3) https://franchisebusinessreview.com/post/franchise-business-model/

 

Digitalisation can radically change debt assistance and administration into a more people-oriented approach

By Banking, News

All signals are on red in the debt sector[1]. The sector is all about people in often difficult situations. More than 2.5 million Dutch households suffer from late payments[2]. About half of those households have structural debt problems. Due to the impact of COVID-19, the number of indebted households is predicted to increase significantly[3]. This makes a stronger focus on digitalisation in the debt market inevitable: preventing people from obtaining income through alternative means, more cost-efficient administration, and a focus on the root cause of debt, the household in debt. Digitalisation of the financial administration processes can let the sector undergo a metamorphosis towards a person-oriented approach. The ingredients are there.

Social impact and growth potential of the sector
The impact of debt is considerable: for the debtor, creditors, and society at large. The Dutch government wants more people to get out of hopeless debt situations and gives high priority to this[4]. A rough estimate is that the debt sector in the Netherlands costs society €11 billion a year (BKR, 2014)[5].

Figure 1 – Debt assistance decreases while administration by court increases significantly, the market potential is huge. 2020 and 2021 estimated avg. by Deloitte (June 2020)

What is remarkable about the sector figures is the significant increase in the number of administrations compared to debt assistance, which puts considerable extra pressure on the legal system. Households with debts need the right support, and getting that support is not easy according to the National Ombudsman[6]. It takes an average of eleven days to get in contact with the municipality. The intake asks a lot of trust from people with debts, who have to share all their information and their precarious situation. Social and cultural beliefs can make the step towards asking for help even harder.
The causes of debt are a combination of factors and, in most cases, complex circumstances:

  1. Environmental factors (economic situation, complexity of society, structural poverty).
  2. Conscious and unconscious behaviour (motivation, financial knowledge, and skills, but also a feeling: doing what others are doing and unconscious psychological processes).
  3. Unexpected events (life-events such as divorce, unemployment, disability, bankruptcy, etc.).
  4. Personal factors (addictions, mild intellectual disability, psychiatric problems).

What is striking is that the root causes of debt are all socio-economic and psychological, while current debt assistance and administration mainly focus on the financial administration to get control over the settlement and prevention of debt.

Sector growth demands digitalisation to shift focus to personal debt causes
Working in the current debt administration sector is tough. The work mostly consists of mail handling, communication, and financial administration, all to relieve those with debts or to handle their financial administration. Until the debts are settled, someone is tied to the debt relief and to a calculated amount of income to live on each week (VTLB in Dutch). On a high level, debt assistance is characterized by three phases: intake, the main debt process, and outflow.

The sector and government are creating great initiatives to innovate this process. Most innovations focus on the exchange of data, such as the data hub of the Dutch association for debt assistance (NVVK). For municipalities and private debt assistors, digitalisation of internal processes is more vital than ever: not only to integrate all data and to handle the foreseen increase in demand, but also to reduce throughput times for clients, improve customer service and reduce operational costs tremendously.

IG&H’s experience in the sector is that a return on investment of less than a year is often possible. Combining sector initiatives with the current possibilities of technology can change debt assistance significantly. Based on IG&H’s experience, the intake phase of the debt process can become more than 60% more efficient thanks to several sector initiatives. A more efficient administrative intake process results in more time to understand the cause and situation of the household in debt.
Examples for the intake part of the process are:

  • The law on debt assistance and data exchange, which enters into force on January 1st, 2021, will give debt controllers access to all necessary personal data. This makes the intake a lot easier for people in debt and will increase data quality and security. The technology to connect is highly mature, on the government side as well as on the commercial side, with solutions such as the ‘Makelaarsuite’ of PinkRoccade Local Government.
  • With PSD2 and solutions such as Budlr, Ockto and Buddy Payments, setting up a budget plan and allocating the financial administration can be digitised and automated.
  • Early signalling of debt is trending and will be a legal duty of municipalities as of January 1st, 2021. The Dutch association for debt assistance, social banking, and administration (NVVK) developed a debt hub to exchange debt data of households. The hub has great potential to provide insight into debts and to reach mutual agreements on restructuring or prioritisation.
  • CDD (Customer Due Diligence) or KYC (Know Your Customer) in the debt sector is not as mature as in the banking sector, which has invested heavily in this recently. By aligning with banks in this process to allocate accounts faster and more securely, improvements can be made on both sides, making the client process more trustworthy. Digital identification and fraud prevention solutions such as Slimmer.ai, Hawk.ai, hellosoda.com, and IDNow.io can improve this process easily. The above-mentioned law makes the use of DigiD and eID for highly reliable identification an even better solution. Digital identification is the essential starting point for data interaction.

The main part of the debt process can be made much more efficient and largely automated, as the following examples show. The result is a game changer for the debt sector, with a focus on coaching and supporting clients on the causes of their debt and making them more self-sufficient in their personal household finances.

  • The use of low-code improves the execution, adaptability, and deployment of the debt operation. A low-code platform such as OutSystems makes it possible to adapt quickly and have a multi-platform application ready in short release cycles.
  • Chatbots and AI can easily support customer contact for the most common questions and provide intelligent dashboards for the debt counsellor and the customer. Microsoft and OutSystems have mature, configurable chatbots available and they are getting better each day.
  • The debt data hub of the NVVK can assist debt mediation and rescheduling, reducing postal mailings significantly.
  • The use of AI in postal mail recognition can relieve the operational work even more. Solutions such as Anntac can recognise up to 90% of postal mail after scanning.
  • Bank account batching for new accounts and API transaction data for instant payments make payments better and faster for all parties. Solutions such as Cashfac with Ebury bank already exist, filling a gap that traditional banks leave open.
  • Most important is an intuitive app and portal for the customer in debt. With the use of AI and a task manager, the process can become less complex and faster, increasing the self-sufficiency of the client, which is the most important goal of debt assistance.
  • Restructuring loans are trending. Especially with the low ECB interest rate, they offer a great opportunity to lower and simplify the debt at once for all debtors and creditors, for parties with a banking license or a partnership with such a party. Last year the number of restructuring loans increased by 16% to 8,952[7].

The outflow is most important for a sustainable financial future. The earlier-mentioned causes of debt differ, and for a large group of households financial stability remains difficult. Monitoring and signalling support from AI, and in a second phase personal coaching, can provide the needed support.
There are several examples available to improve the last part of the debt process:

  • Financial insight is the basis for self-sufficiency after the debts are settled. AI and an easy UX make adoption easier and provide strong support to clients.
  • The early signalling process will keep an eye on new debt risks and can be turned into automatic insights and signals for the client.
  • Financial coaching should be part of a portal and app for clients. The goal is to increase financial capabilities and decrease any financial stress factor, preventing clients from returning to the debt process. There are many solutions on this topic, such as mijnsofie.nl, samapp.nl and wijgaanhetfikksen.nl, which can support clients very well.

Bringing the change together towards a people-oriented approach
It is only symptom control if most of the time within debt assistance and administration is spent on operations. There are multiple, easy-to-implement solutions to digitize the operation. Given that, the focus needs to be on a person- and situation-oriented approach. The expertise of debt assistants and administrators then changes almost completely towards people- and cause-oriented expertise. In this way the debt sector can become more tailor-made, fitting a multidisciplinary approach with different professionals suited to the cause. Let us innovate and help people.

Contact
Gerwin Woelders
E: gerwin.woelders@igh.com

[1] https://nos.nl/artikel/2333348-zorgen-over-oplopende-armoede-door-corona-alle-seinen-staan-op-rood.html
[2] https://www.nibud.nl/beroepsmatig/financiele-problemen/
[3] https://www.rijksoverheid.nl/documenten/kamerstukken/2020/06/08/beantwoording-kamervragen-over-oplopende-armoede-door-coronacrisis
[4] https://www.rijksoverheid.nl/binaries/rijksoverheid/documenten/kamerstukken/2018/05/23/kamerbrief-brede-schuldenaanpak/kamerbrief-brede-schuldenaanpak.pdf
[5] https://www.bkr.nl/home/zakelijk/nieuwsbrieven/nieuwsbrief-december-2014/schulden-kosten-jaarlijks-11-miljard/
[6] https://www.nationaleombudsman.nl/nieuws/2020/toegang-tot-wet-schuldsanering-is-een-hindernisbaan-zonder-finish
[7] https://jaarverslag.nvvk.eu/2019/toelichting-cijfers/index.html

Strategic moves to win post COVID-19 with Data Science 

By Data science, News

The COVID-19 pandemic has shaken up 2020 for many. This year marks a period where uncertainty is suddenly much more common than certainty, and it seems like it’s here to stay. In business, we see that although some thrive in uncertainty, many more require thorough recalibration to the many possible variants of the new normal. In the field of data science, we thrive in uncertainty, as we rely on statistics and AI as a golden compass to navigate through the fog. In the fog, we are dealing with changing, dynamic consumer demand, while physical customers are more distant than ever before. This applies to many industries and markets: supply chains will operate at a different pace, and capacity management in hospitals is a completely different ball game than before. Having a strong data science capability will be of value to adapt and thrive as a business. For some that means starting at absolute zero; for all it means that making the right moves is essential. 

This article gets you going with the five key strategic moves to win in the post COVID-19 game. Furthermore, it lists several must-know techniques and data science terminology that will kickstart you into having the right conversation with your lead data scientist.

1) Gather the troops and move into formation
To be able to quantify uncertainty, and to move away from it as much as possible, you will need a data science team. Your data science team will be crucial in all next steps: crucial to leverage the right data, predict what will happen next, help with automation and play a defining role in building high-quality customer relationships from a distance. Typically, a well-performing data science team includes people with complementary skillsets that cover at least statistical modelling, AI/ML, data engineering and project & people management. If you are short on people, an upcoming economic downturn may increase the availability of talent in the data science arena. 

Even if you already have a team in place, now is the time to review their positioning. Often, the impact of the team is optimal when positioned close to the board room where strategic decision making takes place. From there they can be deployed on high-impact projects where data science will be of additional value besides traditional analyst and BI roles. 

Last but not least, educate everyone within your company on different levels of data literacy. Look inside the company for anyone who can be retrained now that internal and external demand is shifting. Leverage scale and what is already there: MOOCs deliver great value for a small investment. Build a culture that finds its foundation in data and mathematical reasoning, by demanding insight as a key ingredient (instead of just some seasoning) of decision making. The post-COVID-19 world will be different from the one we knew, so make sure your people act on data from today instead of on experience from a pre-COVID-19 world that is no longer relevant. 

2) Plug in the data exhaust
Increasing reliance on data and an active data science team will make your company data-hungry. Although sensitive data should be safeguarded and the right policies should be implemented, you should resist the urge to apply a lockdown to parts of your data warehouse. Instead, let creativity thrive and democratize insensitive data.  

So what does this mean? It means that today’s cross-department data should be available to your data science team, and to the rest of the organization. Democratization of the data that you already have will increase transparency and enhance collaboration across departments. To make that work in terms of technology, move (part of) your data to the cloud and consider NoSQL solutions like Apache Cassandra or MongoDB to make your data available to many interfaces. Furthermore, actively search for and uncover ‘dark’ data that is only available to some, or never tapped into despite being extremely valuable. Last but not least, measure quality and optimize for speed in your data infrastructure, so that applications and data science models generate timely insights that make a high impact. 
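As a minimal sketch of what “available to many interfaces” can mean, the snippet below publishes an insensitive, cross-department record to MongoDB with pymongo; the connection string, database and fields are hypothetical.

```python
from pymongo import MongoClient

# Hypothetical connection string and collection; sensitive fields are deliberately left out
client = MongoClient("mongodb://analytics-db:27017")
db = client["shared_analytics"]

# Publish a sales record for company-wide use
db.orders.insert_one({"order_id": "A-1001", "region": "north", "basket_value": 87.50})

# Any team or application can now query the same collection through its own interface
for order in db.orders.find({"region": "north"}):
    print(order["order_id"], order["basket_value"])
```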

Additionally, we see that COVID-19 changes society and individual values and norms. To stay ahead, you will need to rethink what data to consume and record within the company, and experiment accordingly. Do not be evil, but rethink and tune your ethical approach to data collection so you can constantly adapt your products and services to the ‘new normal’. 

3) Look beyond the fog
More complete data will allow you to make precise measurements of your advancement towards success. But most of your forecasts and (in-production) AI/ML models will be confused by the current situation. They are all trained on pre-COVID-19 data, while post-COVID-19 data is hardly available (yet). Still, you need to look beyond the fog to steer clear of danger. What are some best practices? 

First of all, make sure that your models are properly implemented. The business and your data science team need to know as soon as possible when accuracy drifts. Suggest that your lead data scientist implement automated retraining of models using solutions such as Kubeflow, Azure ML, or AWS SageMaker. Although some human intervention may sometimes be required, this ensures that your models are updated regularly using the latest data. 
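Independent of the platform chosen, such monitoring often boils down to something like the sketch below: a job that compares recent accuracy against the accuracy measured at deployment time and flags the model for retraining when the drop exceeds a threshold. The baseline, threshold and data sources are illustrative assumptions.

```python
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.88   # hypothetical accuracy measured at deployment time
DRIFT_THRESHOLD = 0.05     # hypothetical maximum acceptable drop before retraining

def check_for_drift(model, recent_features, recent_labels):
    """Compare live accuracy with the deployment baseline and signal retraining."""
    current = accuracy_score(recent_labels, model.predict(recent_features))
    drifted = (BASELINE_ACCURACY - current) > DRIFT_THRESHOLD
    if drifted:
        print(f"Accuracy dropped to {current:.2f}; trigger the retraining pipeline")
    return drifted
```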

Second, implement and apply models that require less data or do not need any data. With little data, use simple machine learning models to avoid inaccuracies caused by, for example, overfitting. Overfitted models fit a small training dataset very well, but also capture relations that do not exist in the real world. Talk to your data science teams about ridge regression, KNN, or Naïve Bayes to work with less data. 
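A minimal sketch of comparing such simpler models with cross-validation on a deliberately small, made-up dataset is shown below (ridge regression and KNN for a regression target; Naïve Bayes would be the analogous low-data choice for classification).

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

# Made-up small dataset: 40 observations, 5 features, so overfitting risk is high
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=40)

# Prefer simple, low-variance models and judge them with cross-validation
for name, model in [("ridge", Ridge(alpha=1.0)),
                    ("knn", KNeighborsRegressor(n_neighbors=5))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(name, round(scores.mean(), 2))
```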

Third and last, consider generating scenario data that might replicate the future ahead. Together with the business, your data science team might be able to generate several future scenarios. If your data science capability is more advanced, support these scenarios with data generated using a generative deep learning technique (GANs). Previously, GAN-driven synthetic data only seemed applicable in areas such as renewables¹, but the lack of data that COVID-19 causes will propel this novel application. Going forward, you can leverage machine learning to determine which scenario resembles the current state of your company, and thus gives the most accurate predictions about the future. 
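For illustration, here is a compact PyTorch sketch of the GAN idea on stand-in scenario data; the feature count, network sizes and training length are made-up assumptions and nowhere near a production setup.

```python
import torch
import torch.nn as nn

n_features, latent_dim, batch = 8, 16, 64   # hypothetical scenario feature count

# Generator maps random noise to synthetic scenario vectors; discriminator judges realism
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_features))
D = nn.Sequential(nn.Linear(n_features, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, n_features)    # stand-in for historical scenario features

for step in range(200):
    # Train the discriminator on real versus generated samples
    real = real_data[torch.randint(0, real_data.size(0), (batch,))]
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator
    fake = G(torch.randn(batch, latent_dim))
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic_scenarios = G(torch.randn(1000, latent_dim)).detach()  # data for scenario planning
```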

4) Automate for efficiency
Although the most apparent post-COVID-19 challenge seems to be to win in uncertainty, winning in operational efficiency should not be overlooked in uncertain times. While markets are volatile, shrinking and growing within short periods, the winning players might be those that play the scaling game well. Here we can learn from previous economic downturns, where players that focused on technology- and data-science-driven automation won during and after the recession². 

Data-science-powered decision-making helps cut costs by shifting FTEs away from repetitive tasks towards areas where they can be more valuable. Furthermore, it allows you to scale operations faster, which may be beneficial to sales & operations teams needing to outrun your competitors. Automation is a marriage between technology and data science, but beware of the complexity monster. Machine learning is not a silver bullet: most automation problems are solved for 90% by simple data engineering or by evaluating decisions using simple business rules. If you are looking for tools that are strong in workflow automation, discuss the implementation of Airflow, or look into the new kid on the block: Prefect. 
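As an illustration, a minimal Airflow DAG that chains a data-preparation step and a rules-based decision step; the DAG name, schedule and Python callables are hypothetical placeholders for your own pipeline code.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical placeholder steps standing in for real pipeline code
def extract_invoices():
    print("pulling yesterday's invoices")

def apply_business_rules():
    print("auto-approving invoices that match the simple rules")

with DAG(
    dag_id="invoice_automation",          # hypothetical DAG name
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_invoices", python_callable=extract_invoices)
    decide = PythonOperator(task_id="apply_business_rules", python_callable=apply_business_rules)
    extract >> decide                     # run the decision step after extraction
```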

Automating can be a safe bet if it is focused on low-hanging fruit, where the business case is strong. The complexity most often lies in openly identifying these cases. Furthermore, make sure that you educate your staff and that best practices are shared. Finally, make sure people gain from automating their own responsibilities. Only then will they not fear losing influence or their joy of work, which is a must for making automation work post-COVID-19. 

5) Build high quality distant customer services and relationships
It seems that for society, keeping distance will remain a key value in a post-COVID-19 world. We see that consumers have been forced into fulfilling their needs at a distance, which creates customer habits and expectations that are likely to stick³. These renewed expectations are very likely to propagate to other industries, and you probably want to prepare for them, even in a B2B setting. 

You can strengthen your digital relationship by interacting on a more personal level with your customers, powered by data science. Practically, this may mean predicting which services and products will match personal needs, optimizing availability for specific times and locations, or communicating in the language that appeals most to your customers. By plugging in the data exhaust (step 2) on customer data, your data science team will be able to segment customers using clustering techniques. Going forward, you can apply carefully set-up A/B experiments to quickly learn about and optimize online services. 
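What such a segmentation can look like is sketched below with scikit-learn on made-up behavioural features; the features, scale and number of clusters are illustrative assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Made-up behavioural features per customer: order frequency, basket value, days since last order
X = np.array([
    [12, 85.0, 3], [1, 20.0, 200], [8, 60.0, 10],
    [2, 15.0, 150], [15, 95.0, 2], [9, 70.0, 7],
])

# Scale first so no single feature dominates the distance metric
X_scaled = StandardScaler().fit_transform(X)

# Each customer gets a segment label, which can feed personalised services and A/B tests
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
print(segments)
```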

Getting this to work means querying data science models in real-time, e-commerce-like conditions. Being fast is crucial. To do this, you will need streaming analytics to process and evaluate data in motion, while events are happening⁴. Streaming analytics opens up the application of data science models in environments where we were previously too slow, waiting for the data to arrive or for the model to generate an answer. Although the streaming analytics set-up of today is more complex than a traditional set-up, this is definitely an area to watch and to apply where no other solution will work. 
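A sketch of what evaluating data in motion can look like with Spark Structured Streaming, assuming a Kafka topic of web events and a made-up JSON schema; a simple rule stands in for a real model here.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("event-scoring").getOrCreate()

# Hypothetical Kafka broker and topic carrying web events as JSON
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "web-events")
          .load())

# Made-up payload schema; the Kafka value arrives as bytes and is parsed here
schema = StructType([
    StructField("customer_id", StringType()),
    StructField("basket_value", DoubleType()),
])
parsed = (events
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Score each event while it is in motion (a business rule stands in for a model)
scored = parsed.withColumn("high_value", F.col("basket_value") > 100)

query = scored.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```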

What you should be doing next
In conclusion, evaluating how each of the five steps is relevant to your company and position can be of great help in navigating these challenging times. Each step should inspire you in some way and generate material for discussion within your company. We've written this article to raise questions, but above all to foster discussion. As a first step, you might want to reconnect with your lead data scientist to explore next steps. We're happy to facilitate if you need help.

Furthermore, we look forward to hearing about your experiences in the pursuit of driving success with data science in a post-COVID-19 world. Good luck!

Contact
Tobias Platenburg
E: tobias.platenburg@igh.com

This article has also been published on Towards Data Science 


[1] C. Jiang et al., Day-ahead renewable scenario forecasts based on generative adversarial networks (2020), TechRxiv

[2] Walter Frick, How to Survive a Recession and Thrive Afterward (2019), Harvard Business Review May-June 2019 issue 

[3] Jasper van Rijn, The breakthrough of online shopping as the new standard (2020), IG&H blog series: How Retailers can rebound from the Corona crisis

[4] Databricks, Streaming Analytics (viewed 2020), Databricks Glossary 

Blog 9 | Digital transformation as a vaccine to cope with the ‘new normal’

By News, Retail

Corona causes dramatic changes in consumer behaviour, forcing retailers to respond with unprecedented agility, at the highest pace. Impact varies across sectors, but even within sectors responses vary widely. Many retailers struggle and seem to get stuck whilst others smoothly ride the new waves and successfully launch new propositions. Why is that?  

In the ‘new normal’ one thing is certain: there is a lot of uncertainty, at least in the near future. In many cases, coping with this uncertainty requires an instant and flexible response to adjust capacity, shift direction and realign offerings. Responses easily take too long, and agility can make the difference. Take online food retailer Picnic, which faced tripled demand shortly after the corona outbreak. At short notice, it increased delivery capacity in cities by extending the number of morning delivery timeslots and hiring 500 additional employees. On a different scale, Instacart has nearly doubled its workforce from 200K to 350K employees to offer greater flexibility in its delivery options and to allow customers to place orders two weeks ahead instead of one.

However, agility alone is not enough. Many corona responses need to include innovation, an area in which retailers historically do not excel. Shelves have typically stayed the same for many years, and a lot of retailers are stuck in a paradigm of inefficient, manual processes, inflexible systems, complex governance structures and limited data insights to substantiate innovation. This contrasts with pure e-commerce players, who have innovation in their DNA: digital natives who combine technology and data with a trial-and-error setup, enabling them to manoeuvre more quickly. Take traffic application Flitsmeister, which launched the new service ‘Pickup’ in response to corona-driven capacity challenges and delays at parcel delivery companies. With Pickup, they call on their 1.7 million application users to help retailers deliver packages. Within 24 hours, tens of thousands of application users had registered as potential deliverers and 250 stores had shown interest.

Many factors come into play in creating a setup that accommodates successful and swift innovation. Next to agility, it is also about availability of resources and organizational entrepreneurship, to name a few. Companies like Instacart, Picnic and Flitsmeister illustrate how technology and data can make the difference: technology as an innovation enabler instead of a constraint, and data as the fuel to get it right. For more traditional players who lack these capabilities, now is the time to embark on an accelerated digital transformation journey. We will elaborate on three key ingredients: technology, data and agility.

Ingredient 1 – Technology to enable swift innovations


For many retailers, technology is characterized by complex legacy that has been built up over many years. Typically, most attention goes to ‘keeping the lights on’. Shifting gears towards innovation requires technology to be more flexible than ever. Cloud and high-performance platforms can help shorten time to market in a cost-effective way.

The current market volatility requires high-speed, relevant innovation. Since technology is increasingly widely available, unique and differentiated innovations are crucial to outperform peers. An integrated and seamless customer journey becomes a hygiene factor, with personalized offers as a qualifier rather than a differentiator. These developments can only be realised through IT and the integration between IT systems. However, most retailers rely on outdated IT systems and have a fixed IT spend that is not aligned with their operations. Up to 80 percent of retailers’ IT spend is needed for day-to-day operations, leaving only 20% for investments to differentiate or innovate, where around 50% would be needed. To make a step change, retailers face three challenges: first, reducing costs for day-to-day operations in a sustainable manner; second, making the IT landscape resilient to abrupt demand changes; and third, accommodating agility and fast time to market for differentiated propositions. New technologies are evolving quickly and becoming more mature, hence opening up new possibilities.

Many companies still choose on-premise solutions. Meanwhile, cloud platforms create great possibilities to seamlessly integrate standard plug-and-play solutions into existing IT landscapes. By using cloud platforms, you gain the ability to easily scale up and down and to align IT spend with business volumes through pay-per-use fees. In areas requiring differentiation, flexibility and fast time to market, we observe a vast increase of low-code platforms. Annual market growth of low-code is between 30% and 50%, and it can reduce development time by a factor of 3 to 6 compared with traditional software development. The philosophy of these platforms is to click small, modular building blocks together, avoiding traditional heavy coding; the building blocks can then be tailored to specific needs. Moreover, maintenance cost savings can add up to 50% (for complex integrations). Gartner predicts that 65% of all software development will include low-code by 2024. An interesting example is Lidl, which has built an e-commerce platform that is only available at special times of the year, such as during holidays when people are prepared to eat more luxuriously. Low-code in particular made this business case viable. This way, technology becomes an enabler of high-pace, iterative development (‘launch and learn’) of innovations.

Ingredient 2 – Data as the fuel to get it right

Data has long been the cornerstone of understanding consumer behaviour and making the right decisions to fuel growth ambitions. With corona, however, this is becoming more complex, since historic data is no longer a reliable basis for predicting the future. Yet accurate forecasting has only gained importance due to changing behaviour and volatility in demand.

A data science capability (including predictive analytics) has proven to be a key asset for quite some time now, both for personalizing offerings to meet demand and for adjusting supply accordingly. One of the most well-known examples is the American supermarket Target, which once predicted a high school girl’s pregnancy from her spending habits before her father knew. On the demand side, a data science capability is more important than ever due to corona. Demand is erratic, and customer loyalty can only be achieved through an extensive understanding of a changing customer journey. Surprising your customer is not easy and competition is fierce; a company like Cool Blue shows how to create fans with its tagline ‘Everything for a smile’. On the supply side, the pandemic has shown the vulnerability of supply chains and the importance of reliable forecasts. Patterns in replenishment and promotional sales have completely turned around, requiring a different approach to forecasting and often work-arounds to manage supply adequately. A large Dutch do-it-yourself retailer, for example, established a manual loophole to process orders because of limitations in its existing systems.

Unlocking, structuring and enriching data is one thing, but translating it into actionable business insights that enhance integral decision-making is another. This hinges upon the data science capability being seamlessly linked to the business. Carefully selected datasets should be easily and instantly available at different organizational levels. For example, a company’s CCO is interested in performance across stores, whereas a store manager rather needs commercial insights on how to improve NPS, conversion and basket size in one specific store. Having the right data available enables determining specific actions per channel, region or customer segment, whilst making substantiated cost/benefit trade-offs. An example comes from Picnic, which uses data analytics to optimize the fill rate of its crates. Data showed that the minimum order amount of 25 euros mostly resulted in the need for a second crate that was often only minimally filled. By increasing the minimum order amount to 35 euros, this second crate is now well filled. Most customers already ordered for more than 25 euros, so the increased threshold did not jeopardize customer satisfaction. A simple example of how data can be translated into actionable insights that create value.

Ingredient 3 – Agility and organizational resilience to keep up with the pace

Digital transformation is not just about data and technology. A company’s culture is often the number one barrier. This includes leadership, way of working and employees’ skillset & mindset. Embedding agility in a company’s culture is key, but not easy. A starting point can be a ‘lab’ set-up in which data and technology are combined with an agile way of working to accelerate innovation and hence change the organization organically from within.

Culture evolves and strengthens over time, which often makes it a barrier to transformation. Successful innovation requires an entrepreneurial mindset embedded across the organization. Leadership must be able to delegate responsibilities and give guidance based on output rather than throughput; giving employees the ‘trust to act’ is key. For many retailers this has proven to be a positive side-effect of corona, where employees had to work from home in a more independent setup. Sustaining this once the working situation is ‘back to normal’ will be challenging, though, unless it is structurally embedded in your setup. Booking.com does this by authorizing any employee to launch experiments on millions of customers without management approval. To strengthen innovation culture, the best companies use culture hacking: small, costless actions that can start instantly yet leave a big impact on company culture. Google, for example, has the Courageous Penguin award for people who dare to take a risk without knowing the outcome, just like the first penguin to jump off the iceberg. To encourage new ideas, Ben & Jerry’s introduced the Flavour Graveyard of unsuccessful flavours. Motivating employees to have the courage to look forward and to become ambassadors of innovation is needed to accelerate change.

Agility can only be embedded in the organization when it is facilitated by the way of working. Short cycles that facilitate continuous improvement iterations are needed to speed up the pace of innovation. Again, a great example comes from Booking.com, which runs more than 1,000 experiments simultaneously. By one estimate, they run over 25,000 tests each year to truly understand customer behaviour and respond accordingly, whilst most retailers do not go beyond a few dozen per year.

But how do you become agile? A good starting point can be to initiate innovations in an isolated lab set-up first. In this lab, a dedicated team focuses on innovation, working in sprints of two to three weeks, prioritizing its activities from a backlog full of innovative ideas and working towards a minimum viable product each sprint. This facilitates alignment between all the ingredients, increases adaptiveness and guarantees progress, making the digital transformation really stick.

Conclusion
Any successful digital transformation hinges upon advanced analytics, flexible technology and organizational agility. It is the combination that drives swift innovation effectively. This is a comprehensive and urgent agenda if you are lagging, but embracing and embedding all the ingredients is key to making digital transformation work!

Contact
Bram Gilliam

Director at IG&H
E: bram.gilliam@igh.com
T: +31622564054
 

Maarten Vaessen
Partner at IG&H 
E: maarten.vaessen@igh.com
T: +31653571666

Author: Myrthe van Hoek (myrthe.vanhoek@igh.com)