
When It Comes to Synthetic Data in Specialty Markets, FOMO is Real

Aakash Shirodkar

June 26, 2024

  • AI
  • Automation
  • Data

Fear of missing out is anything but synthetic

For a market whose primary role is to assume the risks of others, reinsurers and the wider specialty market have been noticeably risk-averse when it comes to the adoption and use of synthetic data.

The benefits of synthetic data are being realised by a wide range of companies and market sectors, and adopting synthetic data models could deliver similar benefits to insurance underwriters. Specialty markets need to overcome their fear of missing out (FOMO) and embrace this new practice.

The market has a data dilemma

To make the dilemma concrete, consider that Lloyd’s is more than three centuries old and holds a huge amount of legacy data, much of it handwritten. The market could digitise it all, but the harder question is what it would then do with it.

There are also huge amounts of newly created digital data, growing by the second, which test the specialty market’s ability to manage the data and put it to productive use.

What is clear is that this is a critical time for the market, and a tipping point is approaching at which the market will need to find a way to manage all of that data and use it effectively.

AI will allow insurers to catch up on the data front

When ChatGPT was launched, it moved the ability to access and leverage AI out of the domain of the programmer and put the power of AI in the hands of the public. That was the tipping point for AI in general, and the specialty market now faces a similar inflection point. Those who fail to understand how AI can be used will be left behind.

AI will become the way in which full use of vast amounts of data is achieved, and it is likely we will see fully digital insurers that use AI and synthetic data models to drive their business.

What is synthetic data?

Synthetic data is artificially generated data that mimics real-world data. Underwriters can use it to run in-depth risk assessments and models without the privacy and compliance risks inherent in the real data at their disposal. Synthetic data models can significantly enhance the understanding and underwriting of risks.

Insurers can use synthetic data to overcome a shortage of data when they require greater insight into the risks they face. In general, the more data a model or analytical exercise has, the more reliable its results. Synthetic data can be used to increase the volume of data available and so sharpen the outcomes.

The insurance sector faces strict regulatory and governance requirements for how it uses and treats its clients’ data. Personal data cannot be shared. Synthetic data can allow underwriters to mimic “real” data and use that to build models and enhance risk assessments.

A well-built generator mimics the relationships found in the “real” data, which allows real and synthetic data to be used side by side; because the statistical structure is preserved, models built on the synthetic data produce comparable outcomes.
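
As a rough illustration of what “mimicking the relationships” means in practice, the minimal sketch below fits a simple distribution to a toy, invented set of underwriting fields and then samples synthetic records from it. It uses NumPy only; the field names, parameters, and the choice of a multivariate normal are illustrative assumptions, not a description of any particular vendor’s generator.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Toy "real" data: two correlated underwriting fields (invented) ---
n_real = 1_000
sum_insured = rng.lognormal(mean=12.0, sigma=0.4, size=n_real)
claim_cost = 0.02 * sum_insured + rng.normal(0.0, 2_000.0, size=n_real)
real = np.column_stack([sum_insured, claim_cost])

# Fit a simple multivariate normal: its mean and covariance capture the
# relationships (correlations) present in the real data.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample synthetic records from the fitted distribution: new rows that
# share the real data's structure without copying any individual record.
synthetic = rng.multivariate_normal(mean, cov, size=n_real)

print("correlation in real data:     ", round(np.corrcoef(real, rowvar=False)[0, 1], 3))
print("correlation in synthetic data:", round(np.corrcoef(synthetic, rowvar=False)[0, 1], 3))
```

In practice an insurer would reach for richer generative techniques (copulas, GANs, or tabular diffusion models) and far more rigorous validation, but the principle is the same: learn the structure of the real data, then sample new records from it.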

Synthetic data is anonymised by construction, which means underwriters are not accessing raw data on actual people that could end up in the wrong hands. If an insurer suffers a cyber breach, the hackers leave with anonymised synthetic records rather than personal, identifiable client data, sharply reducing both the exposure of real data and the associated compliance risk.

How can synthetic data benefit insurers?

The overall benefit of synthetic data is to enhance the identification and resolution of various sources of uncertainty in business decision-making. In other words, to improve predictability.

Synthetic data’s first use is to analyse risk. It can generate realistic risk scenarios and improve risk assessment accuracy.

The industry already runs loss models to evaluate its risk and exposure to major events, such as weather-related losses. Synthetic data allows underwriters to run those models on huge volumes of data generated from real risk data, creating a far larger base from which to make more accurate predictions.

At present the industry often builds its forward-looking models from historical events. Synthetic data can factor in impacts that have yet to occur, making those models more accurate. The key is the ability to put predictability at the heart of the process.
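
To make this concrete, the sketch below is a minimal frequency/severity Monte Carlo simulation: it generates tens of thousands of synthetic “years” of weather-related losses from distributions that would, in practice, be calibrated to real loss history. Every parameter here (the Poisson event frequency, the lognormal severity, the quantile printed) is an invented assumption for illustration, not a calibrated catastrophe model.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative frequency/severity model (all parameters invented) ---
# Frequency: number of weather events per year ~ Poisson(lambda)
# Severity:  loss per event ~ lognormal(mu, sigma)
event_lambda = 3.0
severity_mu, severity_sigma = 15.0, 1.2

n_years = 50_000  # synthetic years: far more than any historical record holds

event_counts = rng.poisson(event_lambda, size=n_years)
annual_losses = np.zeros(n_years)
for year, n_events in enumerate(event_counts):
    if n_events:
        annual_losses[year] = rng.lognormal(severity_mu, severity_sigma, n_events).sum()

# Tail statistics that a few decades of real history could never support directly.
print(f"mean annual loss   : {annual_losses.mean():,.0f}")
print(f"1-in-200-year loss : {np.quantile(annual_losses, 1 - 1 / 200):,.0f}")
```

A real catastrophe model adds event footprints, exposure data and correlation between perils, but the principle carries over: enlarge sparse historical experience with statistically consistent synthetic scenarios.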

The second major benefit is around fraud. Synthetic data can shift fraud models away from a sole reliance on past cases, delivering far greater insight into claims and client behaviour and enhancing the ability to identify “red flags” both at the point of risk placement and at claims.

Synthetic data can also support the use of responsible AI. Data always carries some bias, which can lead to a lack of diversity. For instance, if a data set is 60% male and 40% female, that imbalance will carry through however the data is used. Synthetic data can be used to rebalance the set to the desired 50/50 split and thereby reduce the bias.
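
As a minimal sketch of that rebalancing idea, the example below takes an invented portfolio with a 60/40 gender split and tops up the under-represented group with lightly perturbed synthetic records until the split is 50/50. It assumes NumPy and pandas; the column names, noise levels, and the simple resample-and-jitter generator are illustrative assumptions. A production approach would use a proper generative model and validate the result.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# --- Illustrative portfolio with a 60/40 gender imbalance (invented data) ---
n = 10_000
df = pd.DataFrame({
    "gender": rng.choice(["M", "F"], size=n, p=[0.6, 0.4]),
    "age": rng.integers(18, 80, size=n),
    "premium": rng.gamma(shape=2.0, scale=500.0, size=n),
})

counts = df["gender"].value_counts()
minority = counts.idxmin()
shortfall = counts.max() - counts.min()

# Generate synthetic minority-group records by resampling that group and
# adding small perturbations to the numeric fields, so the combined data
# set reaches the desired 50/50 split.
minority_rows = df[df["gender"] == minority]
synthetic = minority_rows.sample(n=shortfall, replace=True, random_state=7).copy()
synthetic["premium"] *= rng.normal(1.0, 0.05, size=shortfall)          # jitter premiums
synthetic["age"] = (synthetic["age"] + rng.integers(-2, 3, size=shortfall)).clip(18, 80)

balanced = pd.concat([df, synthetic], ignore_index=True)
print(balanced["gender"].value_counts(normalize=True).round(3))
```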

Implementing the use of synthetic data

The benefits are clear, but the market is still struggling to recognise and seize the opportunity. All the necessary ingredients are in place; now specialty markets need to move to take full advantage of them.

In addition to the vast amounts of internal data insurers already have at their disposal, there is now a wide range of external suppliers of various kinds of data, and the specialty market needs to overcome its risk aversion and speed its adoption of AI and synthetic data.

But insurers need to have the necessary checks and balances in place to ensure the proper use of AI and synthetic data. Underwriters need to understand the appropriate safeguards if they are to realise the full benefit of the data.

Nothing, not even technology, comes without risks. Insurers need to keep these principles in mind as they move forward:

  • If it is not generated properly, the data will contain bias, and that bias will generate imperfect results.

  • There need to be systems in place to ensure a requisite level of real-world context.

  • A proper framework must also be implemented to ensure the parameters are robust (a minimal validation sketch follows this list).

  • Finally, the data must be ethically used.
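
As one example of what such a framework might check, the sketch below compares a real and a synthetic version of a single numeric field with a two-sample Kolmogorov-Smirnov test (SciPy’s ks_2samp) and flags the field if the distributions drift too far apart. The data, the field name, and the 0.1 threshold are invented for illustration; a real governance framework would check many more properties, including correlations, privacy leakage, and downstream model performance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# --- Toy stand-ins for real and synthetic values of one numeric field ---
real_premium = rng.gamma(shape=2.0, scale=500.0, size=5_000)
synthetic_premium = rng.gamma(shape=2.0, scale=520.0, size=5_000)  # produced elsewhere

# Two-sample Kolmogorov-Smirnov test: does the synthetic column follow
# roughly the same distribution as the real one?
result = stats.ks_2samp(real_premium, synthetic_premium)
print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.3f}")

# A simple guardrail: flag the synthetic field for review if it drifts
# too far from the real distribution (threshold is illustrative).
if result.statistic > 0.1:
    print("WARNING: synthetic 'premium' drifts from the real data, review the generator")
```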

In summary

Yes, there are challenges, but the market needs to recognise the huge benefits that synthetic data can deliver.

The London market is often defined by its FOMO, and once individual players begin to use synthetic data models it will likely trigger a rush to adoption and implementation. Early adopters will stay ahead of their peers, and you should plan now to be one of them.

Further reading

To read more about how Earnix is helping to accelerate the transformation of the London Market, download the report Reshaping London’s Specialty Market - The Impact of Technology and Data.

For additional discussions on synthetic data, you can also check out these two blog posts:

The first covers the rise of synthetic data in general; the second looks in detail at how synthetic data is generated and how it is put to use in practical insurance applications.


Aakash Shirodkar

ROW Practice Head for Data and AI at Hexaware, an Earnix partner