Overcoming the human factor: exchange and augment data to improve an undervalued asset

Data should be vital to the insurance industry, but its potential has yet to be fully exploited, say software specialists at Morning Data.

Data should be the lifeblood of the insurance sector, given that this is an industry reliant on coherent and cohesive information.

But the industry has, some say, been slow to embrace the endless possibilities. Kirstin Duffield, managing director and chief executive officer, and Paul Buckle, commercial and client services director, of Morning Data came to the Re/insurance Lounge, the on-demand platform for interviews and panel discussions with industry leaders, to discuss their work in helping the industry improve how it works in this field.

London-based Morning Data has specialised in software and service solutions for the global insurance industry since 1983. Its two core products are Novus and Helix—the former is an end-to-end administration platform for brokers, managing general agents, and coverholders, while the latter performs a similar function for insurers, reinsurers, and captives.

Relying on data

When it comes to the industry adopting data, Duffield said that while small steps have been taken, there is still a reluctance to go further.

“If we’re talking about exchanging and augmenting data there does seem to be this resistance and belief that the individual entities are somehow unique and do everything differently,” she said.

“It’s a belief that data cannot be exchanged between parties, but we have been exchanging small amounts of data for many years, particularly with the bureaus. The next step is exchanging detailed, structural parts of the data, but there’s been a human reluctance to get on with it.”

She added: “In any industry, there has been an increase in reliance on data and the ability to make decisions and to analyse data. That’s where the London wholesale brokers need to catch up or, at least, embrace a broader sense of the fact that data is an undervalued and intangible asset.”

Buckle offered a more nuanced take. He pointed to the London Market Group’s “London Matters” report from 2014 and the London Market Target Operating Model initiative that followed and was later replaced by the Future at Lloyd’s initiative. The former looked at how competitive the London insurance market was, while the latter looked to enhance business practices with a budget of £250 million over five years.

“If we look at the amount of money and effort that’s gone into these projects but then look at what’s been delivered, I’d say that the market has not received the value from these projects that we were expecting at the outset,” he said. “And, as a technology provider, we’re not seeing that easy integration.”

One of the key problems the industry has when it comes to data is that it is still in the mindset of starting with a document rather than thinking about how it can collect data from the first step, he added.

“That’s still the case in a lot of areas in the business today,” he said.


Given that much of the information collected follows similar lines, no matter which insurance company is collecting it, Buckle said it is striking that a standard for this data has still not become the common baseline from which everyone works.

“It shouldn’t be that hard to get the data standardisation correct. The point is that insurers and brokers are all participants in the chain. And we have a common language, from a technological perspective, that can all be done today,” he explained.
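Buckle’s point about a common language can be made concrete with a minimal sketch. The field names and required set below are illustrative assumptions, not an actual market standard: the idea is simply that every participant in the chain validates a placement record against the same agreed schema.

```python
# Hypothetical common placement record agreed across the chain.
# The required fields are illustrative, not a real market standard.
REQUIRED_FIELDS = {"insured_name", "inception_date", "currency", "sum_insured"}

def validate_placement(record):
    """Return a sorted list of missing required fields (empty means valid)."""
    return sorted(REQUIRED_FIELDS - record.keys())

complete = {
    "insured_name": "Acme Shipping Ltd",
    "inception_date": "2025-01-01",
    "currency": "GBP",
    "sum_insured": 1_000_000,
}
gaps = validate_placement({"insured_name": "Acme Shipping Ltd"})
```

Because the check is the same for broker, MGA, and insurer, a record that passes at one step needs no re-keying at the next.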

One of the chief criticisms of relying on data is that any single source is bound by its own limitations: it will support only a certain number of interpretations and projected outcomes.

“The next step is exchanging detailed, structural parts of the data, but there’s been a human reluctance to get on with it.”
Kirstin Duffield, Morning Data

Duffield does not dispute this, but said that this is a problem with a simple solution: multiple, linked sources of data that buttress and enrich one another.

“If you’re looking only at a single data source it will at any stage have its own limitations. But now we can cross-reference,” she explained.

“There are multiple data sources that you can augment to get a bigger picture of the entity you are interested in.”
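The cross-referencing Duffield describes can be sketched as a simple merge: each additional source fills in fields the earlier ones lack, building a fuller picture of the entity. The sources and field names here are hypothetical illustrations.

```python
def cross_reference(*sources):
    """Merge records about one entity from several sources.

    Earlier sources take precedence; later ones only fill gaps,
    so each source augments rather than overwrites the picture."""
    merged = {}
    for source in sources:
        for field, value in source.items():
            merged.setdefault(field, value)
    return merged

# Hypothetical sources describing the same insured entity.
broker_record = {"name": "Acme Shipping Ltd", "country": "GB"}
registry_data = {"name": "ACME SHIPPING LIMITED", "incorporated": 1998}

profile = cross_reference(broker_record, registry_data)
```

Here the broker’s version of the name is kept, while the registry contributes the incorporation year the broker record lacked.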

There are also risks when a correlation in what is recorded is mistakenly put forward as a causation. As reported in Forbes in October: “The problem with fundamental data science approaches is that correlation doesn’t imply causation. In other words, just because an activity is an outlier doesn’t make it a potential threat.

“There could be many reasons for such outliers, including random chance, new applications, and workers filling in for one another. There may be a relationship between an unusual event and a threat, but the relationship may be spurious.”

It is a tricky proposition. How can the message about improving the quality of data be put across?

“No individual company is responsible for all the data,” said Buckle. “It’s a shared responsibility right through the value chain, potentially starting with the policyholder at the beginning. I believe we can make it easy for everyone involved.

“The more we use those enrichment services, the better we can trust the quality of the data flowing through.”
Paul Buckle, Morning Data

“We can enrich the data through the process chain. We have clients that can populate the data relating to a marine vessel simply by inputting an International Maritime Organization number. That’s no different from putting in your car number plate and having the DVLA tell you what make and model car you have. It’s the pools of data that allow us to enrich that data.

“In terms of quality, the more we use those enrichment services, the better we can trust the quality of the data flowing through. If each participant has that responsibility through the value chain, we can all benefit from that quality. It’s a shared responsibility, not just that of the insurer,” he concluded.
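The IMO-number enrichment Buckle describes can be sketched as follows. The check-digit rule (weights 7 down to 2 on the first six digits) is the published IMO numbering scheme; the vessel registry and vessel details here are a hypothetical stand-in for a real vessel-data service.

```python
# Hypothetical reference table standing in for a commercial vessel-data service.
VESSEL_REGISTRY = {
    "9074729": {"vessel_name": "EXAMPLE VOYAGER", "vessel_type": "Bulk carrier"},
}

def valid_imo(imo):
    """Validate a 7-digit IMO number: weight the first six digits 7..2;
    the last digit of the sum must equal the seventh digit."""
    if len(imo) != 7 or not imo.isdigit():
        return False
    total = sum(int(d) * w for d, w in zip(imo[:6], range(7, 1, -1)))
    return total % 10 == int(imo[6])

def enrich_by_imo(record):
    """Populate vessel fields on a risk record from its IMO number, if resolvable."""
    imo = record.get("imo_number", "")
    if valid_imo(imo) and imo in VESSEL_REGISTRY:
        return {**record, **VESSEL_REGISTRY[imo]}
    return record
```

An invalid or unknown number simply leaves the record untouched, so enrichment can be layered into the process chain without blocking it.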

To view the full Re/insurance Lounge session, click here

Main image: Shutterstock / nikkytok