AI’s “number one thing” and helping with “bottlenecks”
CEO on how every conversation she has now involves artificial intelligence
By Daniel Wood
Governments and businesses around the world, including insurance firms, are coming to grips with artificial intelligence (AI). In Australia, the government has published AI Ethics Principles as an “interim response” to the challenges of this technology. The principles aim to make the technology safe, secure and reliable while further regulation is considered.
AI was one focus area of the recent Women in Insurance Summit Australia in Sydney. A panel explored successful AI use cases across insurance disciplines and also discussed ethical issues.
“Every meeting I’m having, there’s some conversation around AI,” said panellist Simone Dossetor, CEO of Insurtech Australia.
AI for faster information
The leader of the industry’s peak body for tech-focused insurance firms said more than 50% of her members are using the publicly available ChatGPT in some way. Nearly 70% are building their own custom tools.
One key focus for these firms is driving efficiencies.
“A lot of them are around, how can you use these models to get the data, the information you need faster,” said Dossetor.
Some of these AI initiatives include comparison tools for brokers; others can even generate software code.
AI helping with insurance “bottlenecks”
Tim Johnson, head of automation at the insurance giant Suncorp, was also on the panel.
He noted that, for decades, actuaries have used some form of “old” AI for their pricing.
“I think the real change for me – where we’d always struggled as insurers, brokers and underwriting agencies – is that a lot of what we do is wrapped up in documents,” said Johnson. “It’s either a PDS or it’s terms, or it’s emails back and forth.”
The arrival of language modelling and the public release of ChatGPT was a significant step, he said, because the technology can help the industry where much of its work happens.
For example, in claims processing.
Johnson gave the example of a property insurance claim that requires a claims handler to bring together a range of complex information, including building reports and file notes.
“It’s bringing all of that together on a single page,” he said.
This use of AI allows a claims handler to answer complex questions from an insured while they are on the phone.
Johnson said this is one of Suncorp’s current AI focus areas.
“What is it that we can do where language understanding and interpretation is the bottleneck?” he said.
The question for insurance firms to consider, he suggested, is whether there are already AI tools available that can deal with the particular insurance problem they want to solve, or whether the firm needs to build its own bespoke AI product.
AI’s “number one thing”
Florence La Carbona offered a starting point for firms beginning to look at their AI options.
“The number one thing is knowing which problem we’re trying to solve,” she said.
She underlined the importance of using quality data to train AI models. La Carbona said data needs to be accurate, available in sufficient volume, complete, diverse and free of bias.
“Once you have a good, beautiful data set ready for your model, you can begin to unlock the value of data,” she said.
However, this is where some major ethical concerns can begin.
More stakeholders than before
Jehan Mata, partner with Sparke Helmore Lawyers, said the ability of AI technologies to keep evolving, and to enmesh many different stakeholders, makes them challenging from a legal and insurance perspective.
“So the stakeholder isn’t just limited to an insurer who’s implementing a platform, it is also the IT company who’s putting up the platform, and that’s where I see a gap in data sharing,” said Mata.
She said the processes that would make AI, and the information it gathers, transparent already exist. However, AI is an area where these processes have not been fully tested.
Some of the riskier areas that should concern legal and insurance professionals, she said, include the quality of AI data sets and whether AI programs might discriminate against people or make erroneous decisions.
“That’s the part that’s untested and I think that is a real concern,” said Mata.
The chatbot that knew too much
She gave an example. During a technology trial, a chatbot was asked to provide all of the information from its previous conversations. The chatbot obliged.
“The way it was way too helpful caused a concern,” said Mata.
The panellists agreed that insurance firms need to consider how AI creates additional risks beyond those of standard products.
“As you start to adopt solutions that are using more and more generative AI, it’s the third party risk and fourth party risk [that needs considering],” said Johnson.
He gave the hypothetical example of an insurer’s software providers passing sensitive data between them outside of the insurer’s own secure cloud system.
“I think that’s probably the thing that is on our minds mostly,” he said.
Amber Auld, head of insurance for Microsoft Australia, moderated the panel.