Gilchrist Connell: Why AI could be a long-term danger


A legal expert, ChatGPT and the government give their views on AI

Technology

By Daniel Wood

“Technology disruptions” tops a list of six items when you ask ChatGPT, the OpenAI chatbot, what it sees as 2024’s biggest insurance challenges.

“Advancements in technology, such as the widespread adoption of artificial intelligence [AI], blockchain, and big data analytics, can present challenges for the insurance industry,” said ChatGPT. “Adapting to these technologies while ensuring data security and privacy could be a significant challenge.”

The other challenges, it said, range from cybersecurity concerns to climate change and natural disasters.

There is arguably an ironic conflict of interest in ChatGPT naming AI advancements – like itself – as the source of what is likely the industry’s most serious challenge in 2024.

As a disclaimer, the chatty chatbot did add that its last “knowledge update” on this topic was two years ago.

However, many industry experts would probably agree that the chatbot’s list still holds true. They include Sydney-based Alex Haslam (pictured above), from Gilchrist Connell, an insurance-focused national law firm.

“As a firm and for our insurer and insured clients, the use of AI has posed a significant challenge,” said Haslam, a principal at the firm with expertise in insurance, construction and commercial litigation.

He said AI and chatbots can be helpful for legal research, summarising dense expert reports, comparing documents and generating content. One of their strengths, said Haslam, is their speed, which “cannot be matched by humans.”


“However, there are potential issues with data privacy and security and there is no real control over its accuracy; there have been plenty of horror stories as to work produced by an AI ‘hallucination’,” he said.

Haslam said these risks can, to some extent, be mitigated by close supervision and a detailed settling process.

Why AI could be a long-term danger

However, Haslam warned that this close supervision depends on experienced supervisors with “lived experience” from their “hard yards” as industry juniors.

“So there remains a concern that the short-term benefits of AI – and there are certainly benefits in increasing efficiency and reducing costs – will provide a long-term disadvantage to our future,” he said.

Government’s AI discussion paper could help

In June, the government released a discussion paper on the safeguards that should be in place for AI technologies.

The Safe and Responsible AI in Australia discussion paper focuses on the regulatory framework around AI both in Australia and overseas. The paper aims to identify possible gaps and propose ways to strengthen the framework.

Haslam said he is looking forward to what he expects to be “enlightening” outputs from this consultation process.

In fact, just days ago, and soon after Haslam answered these questions from Insurance Business, the government released its interim response to this consultation. Among other things, the interim response found that:

- current regulations do not sufficiently address AI risks
- existing laws do not adequately prevent AI-facilitated harms before they occur, and more work is needed to ensure an adequate response to harms after they occur
- the speed and scale that define AI systems uniquely exacerbate harms, and in some instances make them irreversible, such that an AI-specific response may be needed
- consideration needs to be given to introducing mandatory obligations on those who develop or use AI systems that present a high risk, to ensure those systems are safe


AI’s insurance challenges

Haslam also warned that existing professional indemnity insurance policies may not cover claims arising from the use of AI.

“An additional risk that faces our professional clients is the possibility that their professional indemnity insurance might not cover claims arising out of the use of AI,” he said. “On the other hand, our insurer clients have been faced with claims made on some older wordings that have not been drafted with AI use in mind, even though this is the landscape of today.”

Advice for clients

Haslam said his firm has advised clients to have “open discussions with their brokers and insurers” concerning their use of AI. However, he said clients must also prioritise educating themselves.

“It is imperative that they demonstrate an awareness of the relevant data protection legislation and requirements and they put in place a strong corporate policy as to AI use,” he said.

Haslam said this corporate policy should cover adequate security measures, including around supervision.

“In many instances, AI/chatbots are used in place of junior, human staff members and the real issue is not what it produces, but as to supervision,” he said. “Obviously, this differs if the use of AI is intentionally more pervasive.”

Haslam said that during 2023 Gilchrist Connell developed “a robust corporate policy on AI use” focused on risk elimination.

Advice for insurers

He said his firm has advised insurer clients that the “salient issues for cover” are the protective measures the insured has in place, rather than the use of AI itself, unless the AI is used in “intentionally more pervasive” ways.


Haslam said there are insurance products expressly written to cover claims or losses arising from AI use and AI error.

How do you see the insurance risks posed by AI? Please tell us below.
