Replatforming legacy systems with low-code to increase adaptability and lower costs

By Announcement, News, Pensions

Pensioenfonds Horeca & Catering has successfully transitioned to a new administration platform. The pension fund, known for its good service and low operational costs, aims to lower costs even further while maintaining the ability to adapt to future changes.

Replacing its legacy employer-administration system is part of Pensioenfonds Horeca & Catering’s transition towards a modern, cloud-based IT landscape. In total, data on 85 thousand employers, approximately 1 million employment contracts and well over 3 years of historical data have been successfully migrated to the new platform. The platform was developed by the pension fund and IG&H, in close collaboration with international technology partners OutSystems and Microsoft. IG&H was responsible for developing and implementing the pension administration module.

The complete process of implementation and data migration took only seven months, made possible by OutSystems’ low-code software and IG&H’s pension administration module. Combining the two delivered a major increase in implementation speed and agility. Moreover, Pensioenfonds Horeca & Catering was able to significantly decrease administrative costs, and by running the new administration platform entirely on the Microsoft Azure cloud it achieved maximum scalability.

Paul Braams (CEO Pensioenfonds Horeca & Catering):
“There are three reasons why we are very satisfied with the IG&H collaboration: the professionalism of the people, the collegiality throughout the collaboration and the fact that our partnership goes beyond a temporary commercial deal. Together we have assessed the future and created a joint vision, which we are working towards step by step. Through IG&H we were introduced to low-code as the driving force behind developing new applications. Our first project, replacing the employer administration, was made possible by combining low-code with IG&H’s ‘hard’ pension administration core. This combination makes the project unique: low-cost adaptability, allowing us to adjust to all the imminent changes within the pensions landscape, quickly and cheaply.”

Frans Liem (partner IG&H):
“Pension administrators face the great challenge of replacing and updating their ICT landscape in the years to come. Together with Pensioenfonds Horeca & Catering, IG&H has developed a future-proof approach by assembling the best people from both the pensions industry and the technology platform industry. We are pleased that Pensioenfonds Horeca & Catering embraced and successfully implemented our solution. Together with the professionals of Pensioenfonds Horeca & Catering, we were able to deliver on our promise of speed and accuracy in these transitions with significant impact. This is important in the context of the large changes that the Dutch pensions sector faces.”

Read the interview with Paul Braams and Frans Liem about this topic in PensioenPro.

About IG&H
IG&H is a leading consulting and technology firm specialized in the retail, financial services and healthcare sectors. We drive business transformations through the alignment of People, Business and Technology, and are committed to making a sustainable impact on society. We combine unmatched sector knowledge with digital transformation and technology capabilities, providing services in strategy, digital design and platform technologies, powered by world-class technology partners. With more than 300 professionals in Europe, IG&H is rated as a Great Place to Work and is committed to delivering continuous innovation. IG&H won the OutSystems EMEA Partner of the Year 2020 award.

Frans Liem

When AI-driven decisions are acceptable to clients and Risk & Compliance

By Data science, News

Artificial Intelligence (AI) algorithms are commonly used in many sectors. Yet many financial services companies still apply AI only to a limited extent, especially for decisions that could significantly impact business results. Uncertainty about whether internal and external stakeholders will accept its use is a main reason. What is this uncertainty, and what practical guidance can help? We offer a client-centric framework for such guidance and illustrate it with one of our projects. The framework is composed from the perspective of the end customer, but it also specifically helps to meet the demands of internal stakeholders.

Potential for more value with less work
Specialists like underwriters, risk experts and insurance claim handlers spend a lot of time on repetitive assessments and decisions. When these points in the business process can be automated, the specialists can focus on more challenging activities and provide clients with much more added value. Often rules-based decision models are composed first, but these have their limitations. From experience we know that decision models augmented with AI can further increase the straight-through-processing ratio and make the process more accurate and consistent.

Two parties cause the uncertainty: clients and Risk & Compliance
The AI models that can contribute most to business results can also potentially impact clients the most. An AI model that wrongly assesses creditworthiness or insurance risk can of course cause much more harm than a model that wrongly assesses online click behavior or cross-sell opportunity. These high-stakes models therefore attract close attention from several internal and external stakeholders, such as regulators, risk officers, clients and client representatives.

Because of this, Risk & Compliance teams often place strict demands on model validation and monitoring. The more complex and more impactful the model, the more difficult it is to meet those demands, and many organisations do not yet have much experience with this. In addition, financial services providers are uncertain whether their clients and intermediaries will accept the outcomes of these strongly differentiating, high-impact models. These uncertainties in both front office and back office result in resistance and slow down innovation.

Client acceptance as the leading perspective
The framework below provides guidance on getting AI accepted by both internal and external stakeholders. It is composed from the client perspective, making it more appealing and energizing than thinking only in terms of rules and obligations.

In short it comes down to this: as a client you want to understand how an (automated) decision was reached; you want to feel it is a decision you can live with; and you want to be able to work with it. We elaborate on these points below.

1.     The outcome must be explainable
You must be told what data was actually used and how that data led to the outcome. But to really be considered explainable, the explanation of the outcome must also be perceived as ‘logical’. In other words, the explanation must align with our understanding of how the world works.
Of course, the model must also be consistent, such that outcomes and explanations for similar cases do not differ much.
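The kind of explanation described above can be illustrated with a toy additive scoring model, in which each input’s contribution to the decision can be reported directly. This is a minimal sketch; the features, weights and threshold below are invented for illustration only:

```python
# Toy additive credit-scoring model whose outcome can be explained feature
# by feature. All names, weights and the threshold are invented.

WEIGHTS = {"income_k": 0.8, "debt_k": -1.2, "years_employed": 0.5}
THRESHOLD = 30.0  # scores at or above this are approved

def score(applicant: dict) -> float:
    """Total score: a weighted sum of the applicant's features."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> dict:
    """Per-feature contribution to the score: 'what data was used and how
    it led to the outcome'. Contributions add up exactly to the score."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income_k": 55, "debt_k": 12, "years_employed": 4}
decision = "approve" if score(applicant) >= THRESHOLD else "decline"
```

Real models are rarely this simple, but the principle carries over: explanation techniques such as SHAP produce exactly this kind of additive per-feature breakdown for more complex models.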

2.     The explanation must be acceptable
The data used must not be perceived as a violation of privacy, and the same goes for the insights obtained from it (using a predicted pregnancy, or an upcoming company ownership transfer deduced from a change in transaction patterns, is not appreciated by many clients).
In addition, the model’s outcomes need to be unbiased towards vulnerable groups and towards sensitive characteristics such as gender or ethnicity, whether of individuals or in the composition of a labor force.
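One common, simple check for such bias is to compare outcome rates across groups defined by a sensitive characteristic (the demographic-parity ratio). A minimal sketch, with fabricated decision records:

```python
# Checking decisions for bias across a sensitive attribute using the
# demographic-parity ratio. The decision records are fabricated.

decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "A", "approved": True},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": True},
]

def approval_rate(group: str) -> float:
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

# Ratio of the lower to the higher approval rate; 1.0 means perfectly
# equal rates, values far below 1.0 signal a possible bias to investigate.
rate_a, rate_b = approval_rate("A"), approval_rate("B")
parity_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
```

A low ratio does not prove discrimination on its own (the groups may differ in legitimate ways), but it flags which outcomes need closer human review.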

3.     With a sense of control and benefit for the client
To provide clients with a sense of satisfaction, even in the case of a negative outcome, they must experience a sense of control. This requires the possibility of human intervention in the decision process and a model whose outcomes can be influenced in a predictable way. This last requirement makes it possible to explain to clients what they can do themselves to obtain a better outcome. Of course, it is important that ‘good behavior’ subsequently does indeed lead to better scores and, as a result, lower interest rates and premiums.
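This requirement that the model be influenceable in a predictable way is closely related to counterfactual explanations: telling a declined client which realistic change would flip the outcome. A minimal sketch, reusing a toy scoring model with invented weights and action steps:

```python
# Counterfactual sketch: for a declined client, search for a single
# realistic action that would flip the outcome. Model and steps invented.

WEIGHTS = {"income_k": 0.8, "debt_k": -1.2}
THRESHOLD = 30.0
# Realistic single actions a client could take, per feature.
ACTIONS = {"income_k": 5, "debt_k": -5}

def score(applicant: dict) -> float:
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def counterfactual(applicant: dict):
    """Return the first (feature, change) that turns a decline into an
    approval, or None if no single action is enough."""
    for feature, change in ACTIONS.items():
        adjusted = dict(applicant)
        adjusted[feature] += change
        if score(adjusted) >= THRESHOLD:
            return feature, change
    return None

client = {"income_k": 40, "debt_k": 3}  # scores below the threshold: declined
advice = counterfactual(client)         # the action that would flip the decision
```

Because the toy model is monotonic in each feature, the advice is honest: actually taking the suggested action does improve the score, which is exactly the ‘good behavior leads to better outcomes’ property argued for above.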

When a decision model realizes the perceptions above, there need not be much uncertainty about whether clients will accept the application of AI. Naturally, the service provider must still communicate all of this well enough that clients are completely informed and the above perceptions are indeed effectively conveyed.

Killing two birds with “one” stone
Underneath the three client perceptions lie quite a few requirements. However, we do have the technology and methodology to realize them. The good news is that once you realize the three client perceptions, you will also be able to cover almost anything that Risk & Compliance may demand: making AI models explainable and acceptable to clients, and keeping control over the outcomes, requires specific methods and a certain level of control and transparency. With that in place, you can meet the Risk & Compliance demands as well.

So the next time you contemplate applying AI in a high-impact decision, only three questions are relevant: “Can we explain this to the client?”, “Will they find it acceptable?” and “Will the client have a certain level of control over the outcome?”.

This removes much of the uncertainty, and many puzzle pieces will fall into place. The client perspective energizes most professionals more than the necessity of following difficult rules.

We are open to discussing all sorts of matters involving data science and AI, especially those that touch upon organisational aspects. Feel free to contact us for a cup of coffee!

Mando Rotman

Next-gen core platform for debt relief and administration

By Banking, Clientcases

What they wanted
With the growing challenge of people struggling with debt, our client asked us to build a next-gen core platform to handle their operations and client interaction. This market-leading company needed to maintain its competitive edge while reducing costs, so it could assist people in debt even better. Given a turbulent IT transformation history over the last 20 years, fast and high-end delivery was essential. Their financial and client administration process was highly manual, and the postal and client interaction in particular required a lot of back-and-forth, precious time and large amounts of information.

What we did
With the requirements already documented, we started with a sizing in OutSystems. With this sizing we made a precise estimate of the work to be delivered. After a highly successful proof of concept containing the most important operational functions, we started building towards a first production deployment. On a Microsoft Azure cloud foundation we integrated many third-party solutions to innovate and accelerate; integrating a new specialised DMS and an innovative payment and bank accounts provider improved these processes significantly. Fast delivery and innovation were the main drivers of success, together with harvesting value and lessons learned while preparing and realizing the full IT transformation, including premium services, process and organizational change.

What we achieved
Within a few months we built a strong proof of concept, which was developed directly into a first production release for a small team. Growing in functionality, the core platform was deployed step by step across the whole organisation. IG&H doubled the organisational targets that were set. The new core platform not only accelerates the processes and reduces costs; it also provides whole new capabilities. The focus of debt relief and administration shifted towards a more people-oriented approach. Administration used to be the main workload; with the new core platform and innovations that changed, giving employees more room to dive into the root causes of clients’ debt struggles.

What they said
“The highest code quality combined with fast delivery we ever encountered in comparable large projects”

Award winning data driven mortgages

By Banking, Clientcases

What they wanted
To apply for a mortgage, you need to hand over dozens of documents, from salary slips to proof of debts and a copy of your passport. Processing all these documents takes a great deal of time and is susceptible to fraud and error. This can be done smarter and better with data. To accelerate the handling time and decrease the number of errors, a leading mortgage lender asked us to make several processes fully data-driven. In this way the lender became ready for the future.

What we did
We helped our client implement data-driven solutions to automate their mortgage process. Based on our extensive experience in delivering full straight-through-processing solutions, we helped our client select the appropriate partners to process the relevant client data. In short, this came down to three main parts. The first part was the selection and integration of a tool (e.g. iWize or Ockto) to make direct data delivery by clients possible. The second part related to the selection of the right service provider, one that could directly process data into the back office based on HDN standards. The third part was about using government registers as a source of client data to increase data quality and thereby comply with ever-increasing due diligence regulations.

What we achieved
By taking a step-by-step approach, digitizing one process at a time, we were able to feed the agile sprint cycles in a controllable way so that the project was finished on time. Next to successfully transforming documents into data, we also helped our client cope with the impact of this change on risk management and compliance. Since most mortgages are applied for with the help of an intermediary, we successfully integrated the solution into an advisory package (in Dutch: adviespakket). Now clients only have to use DigiD to log into the app and deliver the required data. After the data is checked, it is sent directly to the mortgage lender. The mortgage provider is now automating more and more processes and is becoming increasingly data-driven. In 2020 our client received an award for this innovation.

What they said
“A party that really has a lot of knowledge of the sector and knows what the most exciting developments are”. – CEO of a top-10 mortgage provider.

IG&H recognised as Partner of the Year EMEA by OutSystems

By Announcement, IGH, News

Exactly 3 years ago, IG&H decided that low-code technology could help our clients make a sustainable impact in their sectors. After we joined forces with highly experienced teams of platform technology experts in Portugal and the Netherlands, a lot has happened, and thanks to smart collaboration it turned out to be a true winning combination. Today IG&H is recognised as Partner of the Year EMEA by OutSystems in two separate categories: 1. Rainmaker, for the most Annual Recurring Revenue (ARR), and 2. Pioneer, for the most new logos.

On top of that, it is even greater to see our smart collaboration with our client Medtronic EMEA being recognised with an Innovation Award!

A big thank you to everyone that contributed to the pillars of this success.

Defining the digital transformation blueprint for a Dutch insurer

By Banking, Clientcases

What they wanted
Our client created a growth strategy for the next 5 years, which exposed that the company needed to make a giant (digital) transformation to implement it. To enable the organisation and IT to realise this strategy, a solid design for the business model, operating model and IT was required.

What we did
As no time could be lost, an interconnected programme with three workstreams was created. Each workstream was jointly led by an IG&H expert and a workstream owner from the client, to ensure transfer of knowledge, buy-in, and fun. Deliverables were carefully aligned to create a plan whose logic carefully built upon all components.

What we achieved
In five months’ time, a blueprint for the entire insurance operation was detailed. Market insights that were previously unavailable to the client were used, the templates that enabled them to structure all blueprints remain in use now that IG&H has left, and, most importantly, the client’s workstream owners could execute the plans that were created. The total transformation will take over 2 years, but will result in a significant improvement in customer service while decreasing costs by 10%.

What they said
“We are very satisfied. Excellent consistency across the workstreams, all deadlines were met and all stakeholders agreed.”

Mortgage Update | Insights from Q2 2020 | Highest number of mortgages since 2008

By Banking, Mortgage Update, News

Mortgage revenue grows to new record of €38 billion, despite COVID-19 pandemic

Download the IG&H Mortgage Update (in Dutch)

Utrecht, August 6 2020 – Mortgage revenue in the second quarter of 2020 grew by 30% compared to the second quarter of 2019. The number of mortgages grew by 26% to 114 thousand, the highest number since 2008. Numbers rose across all groups, but people refinancing and taking out additional loans showed the strongest growth: their numbers rose by 64% compared to the second quarter of 2019. For the first time, the number of mortgages for people refinancing and taking out additional loans exceeded the number of mortgages for new homeowners and transferors.

“The COVID-19 pandemic has yet to negatively affect the Dutch mortgage market. We still see an increase in the number of mortgages for new homeowners and existing homeowners who transfer to a new home. The large increase in the number of mortgages for people refinancing and taking out additional loans shows that the pandemic even seems to have a temporary positive effect on the market,” according to Joppe Smit of management consulting firm IG&H.

The average mortgage value continued to grow for new homeowners and transferors (+0.9%). For people refinancing and taking out additional loans, the average mortgage value decreased by 2.8% compared to the previous quarter. Collectively, this explains the decrease of the overall average mortgage value by 1.3% to €333,000.
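The mix effect behind these numbers can be sketched with fabricated figures: when the segment with lower average mortgage values grows its share of the market, the overall average can fall even though one segment’s average rises. All counts and values below are invented for illustration:

```python
# Mix effect on the average mortgage value, with fabricated numbers:
# even if one segment's average rises, the overall average can fall when
# the lower-value segment grows its share of the market.

def overall_average(segments):
    """Volume-weighted average value across (count, average) segments."""
    total_value = sum(count * avg for count, avg in segments)
    total_count = sum(count for count, _ in segments)
    return total_value / total_count

# (number of mortgages, average value) per segment: buyers, refinancers.
before = [(50_000, 350_000), (30_000, 310_000)]
# Buyers' average +0.9%, refinancers' average -2.8%, and the refinancer
# segment grows strongly, shifting the mix towards lower values.
after = [(51_000, 353_150), (49_000, 301_320)]

change = overall_average(after) / overall_average(before) - 1  # negative
```

The overall average in this sketch falls even though the buyers’ segment average went up, purely because the weights in the volume-weighted mean shifted.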

People refinancing and taking out additional loans cause strongest growth in 5 years
Mortgage revenue of people refinancing and taking out additional loans grew by 64% in the second quarter of 2020 compared to the same quarter in 2019. Their mortgage revenue of €15 billion accounted for 40% of the total market revenue. The number of mortgages likewise grew by 64%. This is the strongest growth in 5 years for both revenue and numbers. “It seems that many people are taking the time to refinance, possibly in anticipation of an increase in interest rates, or to take out additional loans for renovation during this pandemic. That clearly has a positive effect on the Dutch mortgage market,” according to Smit.

Market share of top-3 banks drops to a historic low
The market share of the top-3 banks decreased to 45.6% this quarter. This decrease of 2.3 percentage points compared to the previous quarter brings their market share to the lowest level since the start of our measurements in 2006. Banks experienced the strongest decrease of their market share among people refinancing and taking out additional loans (-5.8 percentage points). ING experienced the strongest decrease for two consecutive quarters.

Over 4,950 advisors have passed the course for advisors in sustainable housing
Since last quarter, IG&H has reported on the progress of the industry collective Duurzaam Wonen, which is getting closer to achieving its aim of educating at least 80% of all mortgage advisors in sustainability by the end of 2020. To date, 5,980 advisors have applied, an increase of 21% compared to last quarter. This implies that 60% of all mortgage advisors have now applied.

We wish you great joy in reading this article and would like to invite you to respond!

Joppe Smit,
Director at IG&H
T: 06 2035 2438

Author & data-analysis IG&H mortgage update
Annelies van Putten-Stemfoort

Subscribe to IG&H’s Mortgage Update

Why ethical reasoning should be an essential capability for Data Science teams and two concrete actions to kickstart your team on ethical knowledge 

By Data science, News

Wherever new technology is introduced, ethics and legislation trail behind the applications. The field of data science can no longer be called new from a technical point of view, but it has not yet reached maturity in terms of ethics and legislation. As a result, the field is especially prone to making harmful ethical missteps.

How do we prevent these missteps right now, while we wait for — or even better: work on — ethical and legislative maturity? 

I propose that the solution lies in taking responsibility as a data scientist yourself. I will give you a brief introduction to data ethics and legislation before I reach this conclusion. I will also share a best practice from my own team, with concrete actions to make your team ethics-ready.

“But data and models are neutral in themselves, why worry about good and bad?”

If 2012 marked the kickoff of the golden age of data science applications — through the crowning of data science as the ‘Sexiest job of the 21st century’ — then 2018 might be the year of data ethics. It is the year in which the whole world started forming an opinion on how data may and may not be used.

The Cambridge Analytica goal of influencing politics clearly fell in the ‘may not’ camp. 

This scandal opened up major discussion about the ethics of data use. Multiple articles have since then discussed situations where the bad of algorithms outweighed the good. The many examples include image recognition AI erroneously denoting humans as gorillas, the chatbot Tay which became too offensive for Twitter within 24 hours and male-preferring HR algorithms (which raises the question: is data science the sexiest, or the most sexist job of the 21st century?). 

Clearly, data applications have left neutral ground. 

In addition to — or maybe caused by — attention from the public, large (governmental) organisations such as Google, the EU and the UN now also see the importance of data ethics. Many ‘guidelines of data/AI/ML’ have been published, which can provide ethical guidance when working with data and analytics.

It is not necessary to enter the time-consuming endeavour of reading every single one of these. A meta-study of guidelines from 39 different authors shows strong overlap in the following topics:

1) Privacy
2) Safety and security
3) Transparency and explainability
4) Fairness and non-discrimination

This is a good list of topics to start thinking and reading about. I highly encourage you to investigate these more deeply yourself, as this article will not explain them as deeply as their importance deserves.

Legal governance, are we there yet? 

The discussion on the ethics of data is an important step in the journey towards appropriate data regulation. Ideally, laws are based on shared values, which can be found by thinking and talking about data ethics. To write legislation without prior philosophical contemplation would be like blindly pressing buttons on a vending machine and hoping your favourite snack comes out.

Some first pieces of legislation aimed at the ethics of data are already in place. Think of the GDPR, which regulates data privacy in the EU. Even though this regulation is not (yet) fully capable of strictly governing privacy, it does propel privacy — and data ethics as a whole — to the center of the debate. It is not the endpoint, but an important step in the right direction. 

At this moment, we find ourselves in an in-between situation in the embedding of modern data technology in society: 

  • Technically, we are capable of many potentially worthwhile applications. 
  • Ethically, we are reaching the point where we can mostly agree on what is and what is not acceptable. 
  • However, legally, we are not in a place where we can suitably ensure that the harmful applications of data are prevented: most data-ethical scandals are solved in the public domain, and not yet in the legal domain. 

Responsibility currently (mostly) rests on the shoulders of Data Scientists 

So, the field of data cannot be ethically governed (yet) through legislation. I think that the most promising alternative is self-regulation by those with the most expertise in the field: data science teams themselves. 

You might argue that self-regulation brings up the problem of partiality; however, I propose it as an in-between solution for the in-between situation we find ourselves in. As soon as legislation on data use matures, less (but never zero) self-regulation will be necessary.

Another struggle is that many data scientists find themselves in a split between acting ethically and creating the most accurate model. By taking ethical responsibility, data scientists also receive the responsibility to resolve this tension. 

I am persuadable with the argument that the unethical alternative might be more expensive in terms of money (e.g. GDPR fines) or damage to company image. Your employer or client may be harder to convince. “How to persuade your stakeholders to use data ethically” sounds like a good topic for a future article. 

My proposal has an important consequence for data science teams: next to technical skills, they would also need knowledge on data ethics. This knowledge cannot be assumed to be present automatically, as software firm Anaconda found that just 18% of data science students say they received education on data ethics in their studies. 

Moreover, a single person with ethical knowledge wouldn’t be enough; every practitioner of data science must have basic skill in identifying the potential ethical threats of their work. Otherwise the risk of ethical accidents remains substantial. But how do you reach overall ethical know-how in your team?

Two concrete actions towards ethical knowledge 

Within my own team, we take a two-step approach: 

1) Hold a group-wide discussion on what each member finds ethically important when dealing with data and algorithms.

2) Construct a group-wide accepted ethical doctrine based on this discussion.

In the first step we educate the group on the current status in data ethics in both academia and business. This includes discussing problems of data ethics in the news, explaining the most prevalent ethical frameworks, and conversation about how ethical problems may arise in daily work. This should enable each individual member to form an opinion on data ethics. 

The team-wide ethical data guidelines constructed in the second step should give our data scientists a strong grounding in identifying potential threats. The guidelines shouldn’t be constructed top down; the individual input that comes out of the group-wide discussions forms a much better basis. This way, general guidelines that represent every data scientist can be constructed. 

The doctrine will not succeed if constructed as a detailed step-by-step list. Instead, it should serve as a general guideline that helps to identify which individual cases should be further discussed. 

Precisely that should be a task of the data scientist: ensuring that potentially unethical data usage does not go unnoticed; not only usage by data scientists, but also by all colleagues who may use data in their work. This way, awareness of data ethics is raised, which enables companies to responsibly leverage the power of data.

In short: start talking about data ethics
We are technically capable of life-changing data applications, but a safety net in the form of legislation is not yet in place. Data scientists walk a tightrope over a deep valley of harmful applications, where overall knowledge of ethics acts as the pole that helps them balance. By initiating the proper discussion, your data science team gains the tools to prevent expensive ethical missteps.

As I argue in the article, discussion on data ethics propels the field towards maturity, such that we can arrive at a “rigorous and complex ethical examination” of data science. So, engage in discussion: be critical about this content, form an opinion, talk about it, and change your opinion often as you encounter novel information. This not only makes you a better data scientist; it makes the whole field better. 

Tom Jongen


What Data Science Managers can learn from McDonald’s

By Data science, Insurance, News

Insurers and intermediaries are digitizing their companies further and faster. This has implications for the organizational functions that support this, such as Data Science and AI teams. From our recently conducted Data Science Maturity Quick Scan among Dutch insurers, we learn that nearly all companies currently organize their Data Science in the same way: centrally. However, the front runners are now about to transition to a different, hybrid organizational model. Determining the best organizational model for the Data Science function turns out not to be simple.

How do you organize Data Science and AI as a scalable corporate function when you can no longer keep it centrally organized, but also do not want to switch to a very decentralized hybrid model?

Three basic models
Actuarial expertise and business intelligence have been part of the insurance business for a long time. But about two decades ago, people with the title Data Scientist started to appear. These professionals usually worked on non-risk use cases, for instance in Sales, Marketing & Distribution, Fraud and Customer Service, and they wanted to apply the latest, often non-linear, Machine Learning (ML) techniques. The introduction of this kind of work often happened in a very decentralized, scattered way throughout organizations. But as the reputation of and expectations for their work grew, these new professionals were often grouped together in central Centers of Competence (CoC).

Fig 1. Three basic models for organizational functions

The CoC model brings some advantages for a new data science function compared to the decentralized model, especially when the company is not yet really functioning as a digital, data-driven organization. Five out of six organizations in our Maturity Quick Scan have organized their data scientists in a CoC model. However, at some companies the digital transformation is getting serious, and in that case strong centralization can create a capacity bottleneck, or cause too big a gap between business and data science teams in terms of knowledge, priorities and communication.

Switching from a centrally organized model to a more hybrid model is often advised as a best-of-both-worlds solution. This should make it easier to scale up and to align the knowledge and application of data science closely with the day-to-day business, while a select number of activities and governance can remain central.

Concerns over hybrid models
Spotify developed a popular hybrid version that has become known as the Spotify model, with its Squads, Tribes, Chapters and Guilds¹. This type of model has become popular among the digital natives and e-commerce companies of this world. But many data science managers at Dutch insurance companies have concerns about this highly decentralized version of a hybrid model. These concerns are:

  • Data culture and the scale of the AI applications may still be insufficient to maintain the needed continuous development and innovation
  • There are often still company-wide use cases to be developed that could be realized much more efficiently centrally
  • In addition, IG&H has identified a large gap in data science maturity between corporate insurance and consumer insurance². A more central organization model can help to narrow that gap

This is why in practice we often opt for a more central version of the hybrid model. But every hybrid (re)organization raises questions such as:

  • How do you keep the complexity of funding and governance down?
  • How do you maintain sufficient control over standards, continuous development and innovation?
  • Which activities and responsibilities do you keep central, and which do you decentralize?

A different perspective: a successful retail business model
Thinking about these questions from a different perspective, or from the perspective of a different sector, can help to find answers. We do this a lot to solve problems at IG&H. In this article we draw an analogy with a good old, trusted business model. In the late 1950s, fast-food giant McDonald's started to conquer the world at a rapid pace. The McDonald's concept (branding and methodology) was a proven success, but the explosive, profitable growth was partly made possible by the business model chosen for expansion: the franchise model.

A franchise is a type of business operated by an individual or individuals (the franchisee) using the trademark, branding and business model of a franchisor. In this business model, there is a legal and commercial relationship between the owner of the company (the franchisor) and the individual (the franchisee). In other words, the franchisee is licensed to use the franchisor's trade name and operating systems.

In exchange for the rights to use the franchisor's business model, to sell the product or service and to be provided with training, support and operational instructions, the franchisee pays a franchise fee (known as a royalty) to the franchisor. The franchisee must also sign a contract (the franchise agreement), agreeing to operate in accordance with the terms specified in it.

A franchise essentially acts as an individual branch of the franchise company.

Fig 2. Illustration of the franchise model

The franchise model offers some important benefits that help a business concept expand fast and durably:

  • Access to (local) capital
  • Entrepreneurship of local establishments
  • Presence close to local customers
  • Ability to facilitate centrally what locally cannot be realized in a cost-efficient way, or what is an essential part of the success formula (e.g. R&D, Production, Procurement and Marketing)

Each of these qualities can be related to lessons learned and best practices in scaling up Data Science within an organization. Just as the franchise principle made it possible for McDonald's to expand rapidly across the world, it can help expand Data Science throughout the company.

Applying franchise model principles
Many insurers desire a more adaptable and entrepreneurial company culture. Ownership, client focus and flexibility are key words. The principle of local entrepreneurship in the franchise model fits this very well. It encourages both the responsible business owner and the data scientist to work in a client-focused way, always with a business case to achieve.

Data Science is a broad, rapidly evolving discipline that is often not yet well embedded in most companies. This makes a central organizational element very valuable for ensuring quality standards, continuous development and innovation, just as the franchisor in the franchise model often facilitates certain means of production, R&D and best practices. One of the front runners in organizing data science in the Dutch insurance market is therefore transitioning to a franchise-inspired organizational model for its data science function.

So, franchise-inspired thinking can be valuable for Data Science organizations that want to move beyond the CoC but do not (yet) want to adopt a highly decentralized hybrid model like the Spotify model. Which organizational model and which specific choices fit best naturally depends on multiple variables, among others company structure and culture, the digital maturity of the organization, and the maturity of the data science function itself. Because the value of data science and AI depends entirely on the quality of collaboration with the business and on broad application throughout the company, it is vital to objectively assess the current situation and to strike the right balance between centralized and decentralized, with a company-specific touch.

Mando Rotman
Manager Data Science IG&H

Jan-Pieter van der Helm
Director Financial Services IG&H



Strategic moves to win post COVID-19 with Data Science 

By Data science, News

The COVID-19 pandemic has shaken up 2020 for many. This year marks a period in which uncertainty is suddenly much more common than certainty, and it seems to be here to stay. In business, we see that although some companies thrive in uncertainty, many more require thorough recalibration to the many possible variants of the new normal. In the field of data science, we thrive in uncertainty because we rely on statistics and AI as a golden compass to navigate through the fog. In that fog, we are dealing with changing, dynamic consumer demand, while physical customers are more distant than ever before. This holds for many industries and markets: supply chains will operate at a different pace, and capacity management in hospitals is a completely different ball game than before. A strong data science capability will be valuable for adapting and thriving as a business. For some that means starting at absolute zero; for all it means that making the right moves is essential.

This article gets you going with five key strategic moves to win the post-COVID-19 game. It also lists several must-know techniques and data science terms that will kickstart the right conversation with your lead data scientist.

1) Gather the troops and move into formation
To quantify uncertainty, and to move away from it as much as possible, you will need a data science team. Your data science team will be crucial in all the next steps: leveraging the right data, predicting what will happen next, helping with automation, and playing a defining role in building high-quality customer relationships from a distance. Typically, a well-performing data science team includes people with complementary skill sets covering at least statistical modelling, AI/ML, data engineering, and project & people management. If you are short on people, an upcoming economic downturn may increase the availability of talent in the data science arena.

Even if you already have a team in place, now is the time to review its positioning. The impact of the team is often optimal when it is positioned close to the board room, where strategic decision making takes place. From there it can be deployed on high-impact projects where data science adds value beyond traditional analyst and BI roles.

Last but not least, educate everyone in your company on data literacy, at different levels. Look inside the company for anyone who can be retrained now that internal and external demand is shifting. Leverage scale and what is already there: MOOCs deliver great value for a small investment. Build a culture founded on data and mathematical reasoning by demanding insight as a key ingredient (instead of just some seasoning) of decision making. The post-COVID-19 world will be different from the one we knew, so make sure your people act on today's data instead of on experience from a no-longer-relevant pre-COVID-19 world.

2) Plug in the data exhaust
Increasing reliance on data and an active data science team will make your company data-hungry. Although sensitive data should be safeguarded and the right policies should be implemented, resist the urge to apply a lockdown to parts of your data warehouse. Instead, let creativity thrive and democratize insensitive data.

So what does this mean? It means that today's cross-department data should be available to your data science team and to the rest of the organization. Democratizing the data you already have will increase transparency and enhance collaboration across departments. To make that work technologically, move (part of) your data to the cloud and consider NoSQL solutions like Apache Cassandra or MongoDB to make your data available to many interfaces. Furthermore, actively search for and uncover 'dark' data that is only available to some, or that is never tapped into despite being extremely valuable. Last but not least, measure quality and optimize your data infrastructure for speed, so that applications and data science models generate timely, high-impact insights.

Additionally, we see that COVID-19 is changing society's and individuals' values and norms. To stay ahead, you will need to rethink which data to collect and record within the company, and experiment accordingly. Do not be evil, but rethink and tune your ethical approach to data collection so you can constantly adapt your products and services to the 'new normal'.

3) Look beyond the fog
More complete data will allow you to measure your advancement toward success precisely. But most of your forecasts and (in-production) AI/ML models will be confused by the current situation: they were all trained on pre-COVID-19 data, while post-COVID-19 data is hardly available (yet). Still, you need to look beyond the fog to steer clear of danger. What are some best practices?

First of all, make sure that your models are properly implemented. The business and your data science team need to know as soon as possible when accuracy drifts. Suggest that your lead data scientist implement automated retraining of models using solutions such as Kubeflow, Azure ML, or AWS SageMaker. Although human intervention may sometimes be required, this ensures that your models are updated regularly using the latest data.
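As a lightweight complement to full MLOps tooling, a drift check can be as simple as comparing a model's rolling accuracy against a pre-COVID-19 baseline. A minimal Python sketch (the class name, window size and tolerance are illustrative assumptions, not any product's API):

```python
from collections import deque

class DriftMonitor:
    """Track rolling accuracy of a deployed model and flag drift when it
    falls below a baseline by more than a tolerance (illustrative sketch)."""

    def __init__(self, baseline_accuracy, tolerance=0.05, window=100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong

    def record(self, prediction, actual):
        """Log one prediction/outcome pair as it arrives in production."""
        self.outcomes.append(1 if prediction == actual else 0)

    def rolling_accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else None

    def drift_detected(self):
        """True when rolling accuracy drops below baseline minus tolerance,
        i.e. the signal to trigger retraining or human review."""
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance
```

A monitor like this would sit next to the model in production, and a `drift_detected()` result of True would kick off the retraining pipeline.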

Second, implement and apply models that require less data, or none at all. With little data, use simple machine learning models to avoid inaccuracies caused by, for example, overfitting. Overfitted models fit a small training dataset very well, but also capture relations that do not exist in the real world. Talk to your data science teams about Ridge Regression, k-NN, or Naïve Bayes to work with less data.
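To illustrate why a simple model suits a small dataset, here is closed-form Ridge Regression in plain NumPy; the L2 penalty `alpha` shrinks the coefficients and tames overfitting (the data points and `alpha` value are made up for illustration):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha * I)^-1 X^T y.
    The L2 penalty (alpha) shrinks coefficients, which guards against
    overfitting when training data is scarce."""
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Only five observations of a roughly linear relation y ~ 2x (fabricated).
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
w = ridge_fit(X, y, alpha=0.5)  # single coefficient, close to 2
```

In practice your team would reach for `sklearn.linear_model.Ridge`; the point is that with five observations, a one-parameter linear model is defensible where a deep net would simply memorize the noise.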

Third and last, consider generating scenario data that might replicate the future ahead. Together with the business, your data science team may be able to define several future scenarios. If your data science capability is more advanced, support these scenarios with data generated by a generative deep learning technique such as a generative adversarial network (GAN). Previously, GAN-driven synthetic data only seemed applicable in areas such as renewables¹, but the lack of data caused by COVID-19 will propel this novel application. Going forward, you can leverage machine learning to determine which scenario most resembles the current state of your company, and thus gives the most accurate predictions about the future.
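A full GAN is beyond the scope of this article, but the scenario idea can be prototyped far more simply, for example by resampling history with a per-scenario shock factor. The function below is a hypothetical stand-in for a generative model, with fabricated demand figures:

```python
import random

def generate_scenarios(history, n_scenarios=3, horizon=6, shock_range=(0.7, 1.1)):
    """Generate demand scenarios by resampling historical values and applying
    one random multiplicative shock per scenario: a lightweight stand-in for
    a full generative model such as a GAN."""
    scenarios = []
    for _ in range(n_scenarios):
        shock = random.uniform(*shock_range)  # 0.7 = pessimistic, 1.1 = optimistic
        scenarios.append([random.choice(history) * shock for _ in range(horizon)])
    return scenarios

random.seed(42)  # reproducible illustration
monthly_demand = [100, 95, 110, 105, 98, 102]  # fabricated history
scenarios = generate_scenarios(monthly_demand)
```

Each scenario can then be fed through your forecasting models, and the one that tracks incoming actuals most closely becomes your working assumption.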

4) Automate for efficiency
Although the most apparent post-COVID-19 challenge seems to be winning amid uncertainty, winning in operational efficiency should not be overlooked in uncertain times. While markets are volatile, shrinking and growing within short timeframes, the winning players may be those that play the scaling game well. Here we can learn from previous economic downturns: players that focus on technology- and data-science-driven automation win during and after a recession².

Data-science-powered decision making helps cut costs by shifting FTEs away from repetitive tasks to areas where they add more value. Furthermore, it allows you to scale operations faster, which benefits sales & operations teams that need to outrun your competitors. Automation is a marriage between technology and data science, but beware of the complexity monster. Machine learning is not a silver bullet: 90% of most automation problems can be solved by simple data engineering or by evaluating decisions using simple business rules. If you are looking for tools that are strong in workflow automation, discuss implementing Airflow, or look into the new kid on the block: Prefect.
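To make the "simple business rules" point concrete, the sketch below triages hypothetical insurance claims with plain rules; only the ambiguous remainder would need an ML model or a human handler (the field names and thresholds are invented for illustration):

```python
def triage_claim(claim):
    """Route a claim with plain business rules before any ML is involved.
    Fields and thresholds are hypothetical, for illustration only."""
    if claim["amount"] <= 500 and claim["customer_years"] >= 3:
        return "auto-approve"      # small claim from a loyal customer
    if claim["fraud_signals"] > 0:
        return "fraud-review"      # any red flag goes to specialists
    if claim["amount"] > 10_000:
        return "senior-handler"    # large claims need experienced staff
    return "standard-queue"        # the rest: normal processing or ML scoring
```

Rules like these are transparent, cheap to maintain, and already automate the bulk of the volume; the ML investment then concentrates on the genuinely hard cases.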

Automation can be a safe bet if it is focused on low-hanging fruit, where the business case is strong. The complexity most often lies in openly identifying these cases. Furthermore, make sure that you educate your staff and that best practices are shared. Finally, make sure people gain from automating their own responsibilities. Only then will they not fear losing influence and their joy of work, which is a must for making automation work post-COVID-19.

5) Build high quality distant customer services and relationships
It seems that, for society, keeping distance will remain a key value in the post-COVID-19 world. Consumers have been forced to fulfil their needs at a distance, which creates customer habits and expectations that are likely to stick³. These renewed expectations are very likely to propagate to other industries, and you probably want to prepare for that, even in a B2B setting.

You can strengthen your digital relationships by interacting with your customers on a more personal level, powered by data science. In practice this may mean predicting which services and products will match personal needs, optimizing availability for specific times and locations, or communicating in the language that appeals most to your customers. By plugging in the data exhaust (step 2) on customer data, your data science team will be able to segment customers using clustering techniques. Going forward, you can apply carefully set up A/B experiments to quickly learn about and optimize online services.
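Customer segmentation by clustering can be sketched with a minimal k-means implementation; in practice your team would likely use an off-the-shelf version such as scikit-learn's `KMeans` (the customer features below are fabricated):

```python
import numpy as np

def kmeans(points, k, n_iter=20):
    """Minimal k-means for illustration (in practice use sklearn.cluster.KMeans).
    Naive initialization: the first k points serve as starting centers."""
    centers = points[:k].astype(float).copy()
    for _ in range(n_iter):
        # Assign each point to the nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical customers: (annual spend in EUR 1000s, visits per month).
customers = np.array([
    [1.0, 1.0], [1.2, 0.8], [0.9, 1.1],   # low spend, low frequency
    [9.0, 8.0], [8.8, 8.5], [9.2, 7.9],   # high spend, high frequency
])
labels, centers = kmeans(customers, k=2)  # two segments emerge
```

The resulting segments then drive the personalization: which products to recommend, which tone of voice to use, which channel to prioritize per group.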

Getting this to work means querying data science models in real-time, e-commerce-like conditions. Being fast is crucial. To achieve this, you will need streaming analytics to process and evaluate data in motion, while events are happening⁴. Streaming analytics opens up applying data science models in environments where we were previously too slow, waiting for the data to arrive or for the model to generate an answer. Although today's streaming analytics set-up is more complex than a traditional set-up, this is definitely an area to watch, and to apply where no other solution will work.
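The essence of streaming analytics, evaluating data in motion instead of waiting for a complete dataset, can be illustrated with a rolling aggregate over a stream of events (the scores below are made up; a production set-up would use a platform such as Spark Structured Streaming or Kafka Streams):

```python
from collections import deque

def streaming_average(events, window=3):
    """Yield a rolling average per incoming event instead of waiting for the
    full dataset: the core idea of streaming analytics, stripped to its essence."""
    buffer = deque(maxlen=window)  # only the most recent `window` events are kept
    for value in events:
        buffer.append(value)
        yield sum(buffer) / len(buffer)

# E.g. customer satisfaction scores arriving one by one from a live channel.
scores = [4, 5, 3, 5, 4]
rolling = list(streaming_average(scores))
```

Because an answer is emitted per event, downstream logic (an alert, a next-best-action model) can react while the customer is still in the session.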

What you should be doing next
In conclusion, evaluating how each of the five steps is relevant to your company and position can be of great help in navigating these challenging times. Each step should inspire you in some way and generate material for discussion within your company. We've written this article to generate questions, but especially to foster discussion. As a first step, you might want to reconnect with your lead data scientist to explore next steps. We're happy to facilitate if you need help.

Furthermore, we look forward to hearing what your experiences will be in a pursuit to drive success using data science in a post-COVID-19 world. Good luck! 

Tobias Platenburg

This article has also been published on Towards Data Science 

[1] C. Jiang et al., Day-ahead renewable scenario forecasts based on generative adversarial networks (2020), TechRxiv

[2] Walter Frick, How to Survive a Recession and Thrive Afterward (2019), Harvard Business Review May-June 2019 issue 

[3] Jasper van Rijn, The breakthrough of online shopping as the new standard (2020), IG&H blog series: How Retailers can rebound from the Corona crisis

[4] Databricks, Streaming Analytics (viewed 2020), Databricks Glossary