Risk Management News

Tackling the misuse of AI in insurance

EY head on an issue the industry needs to get on top of


By Mia Wallace

“This year, we wanted to highlight the recurring theme of the global protection gap from a different angle – examining how the insurance industry can restore trust and deliver more societal value.”

Exploring some of the key themes of EY’s latest ‘Global Insurance Outlook’ report, Isabelle Santenac (pictured), global insurance leader at EY, emphasised the role that trust and transparency play in unlocking growth. It’s a link put firmly under the microscope in the annual report as it examined how the insurance market is being reshaped by multiple disruptive forces including the evolution of generative AI, changing customer behaviours and the blurring of industry lines amid the development of new product ecosystems.

Tackling the issue of AI misuse

Santenac noted that the interconnectivity between these themes is grounded in the need to restore trust, which sits at the centre of both the opportunities and the challenges created by so much disruption. This is particularly relevant given the industry's drive to become more customer-focused and build customer loyalty, she said, which requires customers to have trust in your brand and in what you do.

Zeroing in on the “exponential topic” that is artificial intelligence, she said she’s seeing a great deal of recognition across the industry of the opportunities and risks AI – and particularly generative AI – presents.


“One of the key risks is how to ensure you avoid the misuse of AI,” she said. “How do you ensure you’re using it in an ethical way and in a way that’s compliant with regulation, in particular with data privacy laws? How do you ensure you don’t have bias in the models you use? How do you ensure the data you’re using to feed your models is safe and correct? It’s a topic that’s creating a lot of challenges for the industry to tackle.”

Test cases or use cases? How insurance businesses are embracing AI

These challenges are not stopping companies across the insurance ecosystem from working on ‘proof of concept’ models for internal processes, she said, but there is still strong hesitancy to extend these to client-facing interactions, given the risks involved. Pointing to a recent EY survey on generative AI, she noted that real-life use cases remain very limited, not only in the insurance industry but also more broadly.

“Everyone is talking about it, everyone is looking at it and everyone is testing some proof of concept of it,” she said. “But no-one is really using it at scale yet, which makes it difficult to predict how it will work and what risks it will bring. I think it will take a little bit of time before everyone can better understand and evaluate the potential risks because right now it’s really nascent. But it’s something that the insurance industry has to have on its radar regardless.”

Understanding the evolution of generative AI

Digging deeper into the evolution of generative AI, Santenac highlighted the pervasive nature of the technology and the impact it will inevitably have on the other pressing themes outlined in EY’s insurance outlook report for 2024. No current conversation about customer behaviours or brand equity can afford to ignore AI’s potential impact on a brand, she said, or the reputational harm that failing to use it correctly and ethically could bring.


“Then on the other hand, AI can help you access more data in order to better understand your customers,” she said. “It can help you better target what products you want to sell and which customers you should be selling them to. It can support you in getting better at customer segmentation, which is absolutely critical if you want to serve your clients well. It can help inform who you should be partnering with and which ecosystems you should be part of to better access clients.”

It’s the pervasive nature of generative AI that sets it apart from ‘flash in the pan’ buzzwords such as blockchain, the Internet of Things (IoT) and the metaverse. AI is already touching many elements of the insurance proposition, she said, from a process perspective, a selling perspective and a data perspective. It’s becoming increasingly clear that the trend is here to stay, not least because machine learning as a concept has been around and in use for a long time.

What insurance companies need to be thinking about

“The difference is that generative AI is so much more powerful and opens up so many new territories, which is why I think it will last,” she said. “But we, as an industry, need to fully understand the risks that come from using it – bias, data privacy concerns, ethics concerns and so on. These are critical risks, but we also need to recognise, from an insurance industry perspective, how these can create risks for our customers.

“For me, this presents an emerging risk – how can we propose protection around the misuse of AI, around breaches of data privacy and all the things that will become more significant risks with the use of generative AI? That’s a concern which is just emerging, but the industry has to reflect on it in order to fully understand the risk. For instance, experts are projecting that generative AI will increase the risk of fraud and cyber risk. So, the question for the industry is – what protection can you offer to cover these new or increasing risks?”


Insurance companies must start thinking about these questions now, she said, or they risk being left behind as further advancements unfold. This is especially relevant given that litigation around the use and misuse of AI has already begun, particularly in the US. The first thing for insurers to consider is the implications of their clients misusing AI and whether that is implicitly or explicitly covered in their insurance policies. Insurers need to be very clear about what they are and are not covering their clients for, or they risk repeating what happened during the pandemic with the business interruption lawsuits and payouts.

“It’s important to know already whether your current policies cover potential misuse of AI,” she said. “And then if that’s the case, how do you want to address that? Should you ensure that your client has the right framework in place to use AI? Or do you want to reduce the risk on this particular topic, or potentially exclude the risk? I think that’s something insurers have to think about quite quickly. And I know some are already thinking about it quite carefully.”
