Cyber ransoms: a moral and commercial issue | Insurance Business Australia
From cyber ransoms to unchecked generative AI
The cyber insurance market continues to provide a range of challenges for insurers and brokers.
During his October visit to the United States, Prime Minister Anthony Albanese and Microsoft executives jointly announced the company’s five-billion-dollar investment in Australia over two years to build a cyber shield and expand the digital economy.
“We welcome Microsoft’s investment to increase the capacity of hyperscale cloud computing, in conjunction with the Australian Signals Directorate (ASD), to enhance cybersecurity defences within the country,” said Ben Robinson (pictured above), professional risk placement manager with Honan Insurance Group.
“We feel that anything that can be done to prevent and protect against cyber risk is positive,” said Robinson. “However, cyber insurance remains a critical part of the equation.”
One emerging issue discussed at the conference, he said, was the “train of thought that suggests investing in a cyber insurance policy is an admission of weakness or lack of confidence in a business’s ability to protect against cyberattacks.”
Robinson was keen to dispel that view but said the reality is that no cyber protection measures are perfect.
“There is always capacity for a breach, no matter how strong a business’s defences may be,” he said. “Organisations must also demonstrate an increasingly stringent minimum level of cyber hygiene to secure a policy in any case, so having a policy in place is a sign of due diligence and compliance rather than weakness.”
Cyber ransoms: should they be banned?
‘Never pay a ransom’ says the website of the federal government’s Australian Cyber Security Centre (ACSC).
The government is deciding whether paying ransoms should be banned under its 2023-2030 Australian Cyber Security Strategy.
Robinson doesn’t agree with a ban.
“While paying a ransom validates the ransomware business model and funds bad actors, a blanket ban on ransoms is more likely to harm Australian businesses and citizens than it is to impact cyber-criminal groups,” he said.
He said there is still a misconception that cyber criminals are “bedroom hackers.”
“In reality we are dealing with sophisticated organised crime syndicates run by highly skilled experts who have been honing their craft over many years,” said Robinson.
He said cyber criminals “are not stupid” and run tight business operations. He suggested that these criminal enterprises operate much like lawful firms, with a focus on efficiency and the best use of time and resources.
“They are adaptable and when one strategy starts to underperform, they pivot their approach to maximise ROI – just like any business C-suite,” said Robinson.
Other industry stakeholders agree.
Robinson said any decision about paying a ransom should be evaluated on a case-by-case basis.
“There are both moral and commercial considerations which form part of that evaluation process,” he said.
The moral perspective, said Robinson, considers if paying the ransom enables more cyber-criminal behaviour.
“From a commercial perspective, does the benefit of paying the ransom outweigh the potential reputational harm and business interruption expenses likely to be incurred by the business?” he said.
Robinson said in a recent case involving a client, paying the ransom was the more cost-effective solution.
“The cost to recreate and restore the lost data exceeded the ransom demand,” he said.
Generative AI risks
Another major challenge discussed at the Melbourne conference was the risk of unchecked generative AI.
“Generative AI tools, like ChatGPT, are highly prone to errors and misinformation,” said Robinson. “Organisations that provide advice to their customers, whether via their blog or direct consultation, cannot rely on the accuracy of the information generative AI provides.”
The use of these tools, he said, could impact a firm’s insurance coverages.
“While specialised professional indemnity or errors and omissions insurance exists to protect insured companies against claims from clients that allege financial losses due to incorrect advice,” said Robinson, “the use of AI in any professional capacity could incur specific policy conditions in the future, potentially impacting insurance covers.”
He said an example could be a lawyer asking ChatGPT to summarise a client’s annual financial reports.
“But if the summary is incorrect and the wrong advice is provided to the client, this puts the lawyer in breach of their duty of care towards their client,” said Robinson.
The risk, he said, is that the individual and their company could be sued. Robinson said one key concern is how tools like ChatGPT could facilitate erroneous advice on a very large scale.
“Generative AI is only in its infancy, and many of the policies surrounding its use are yet to be established,” he said. “However, we expect that laws surrounding legal and fair use of generative AI will slowly begin to emerge.”
Robinson cited numerous major companies, from Samsung to Amazon, that have banned the use of ChatGPT in the workplace.
“Given the significant risk of data privacy breach, the potential breach of cyber insurance and professional indemnity policies, as well as the potential risk of copyright infringement, organisations need to ask themselves — do the benefits of generative AI outweigh the potential risks?” he said.
Robinson believes that at this stage, there’s no right or wrong answer.
“It’s up to each individual organisation to decide what risks they’re willing to take,” he said.