Caught in an AI arms race

Two industry experts on a “double-edged sword” and what risk managers should be most aware of

Risk Management News

By Kenneth Araullo

While the dawn of generative AI has been hailed as a breakthrough across major industries, it is no secret that its benefits have also opened new avenues of threat, the likes of which most of us have never seen before. A recent cybersecurity report found that as many as eight in 10 respondents believe generative AI will play a more significant role in future cyber attacks, with four in 10 also expecting a notable increase in these kinds of attacks over the next five years.

With battle lines already drawn – one side using AI to bolster businesses while the other uses it to breach defences and pursue criminal activity – it is up to risk managers to ensure their businesses do not fall behind in this AI arms race. In conversation with Insurance Business’ Corporate Risk channel, two industry experts – MSIG Asia’s Andrew Taylor and Coalition’s Leeann Nicolo – offered their thoughts on this new landscape, as well as what the future could look like as AI becomes a more prevalent fixture in all aspects of business.

“We see attackers’ sophistication levels, and they are just savvier than ever. We have seen that,” Nicolo said. “However, let me caveat this by saying there’ll be no way for us to prove with 100% certainty that AI is behind the changes that we see. That said, we’re pretty confident that what we’re seeing is a result of AI.”

Nicolo put it down to a few things, the most common of which is better overall communication. Just a couple of years ago, she said, threat actors did not speak English very well, their presentation of exfiltrated client data was not very clear, and most of them did not really understand what kind of leverage they had.

“Now, we have threat actors communicating extremely clearly, very effectively,” Nicolo said. “Oftentimes, they produce the legal obligation that the client may face, which, in the time that they’re taking the data, and the time it would take them to read it and ingest and understand the obligations, it’s as clear as it can be that there is some tool that they’re using to ingest and spit that information out.”

“So, yes, we think AI is definitely being used to ingest and threaten the client, especially on the legal side of things. With that being said, before that even happens, we think AI is being utilised in many cases to create phishing emails. Phishing emails have gotten better; the spam is certainly much better now, with the ability to generate individualised campaigns with better prose and specifically targeted towards companies. We’ve seen some phishing emails that my team just looks at, and without doing any analysis, they don’t even look like phishing emails,” she said.

For Taylor, AI is one of the trends that will continue to grow in prominence among future perils in the cyber sector. While 5G and telecommunications, as well as quantum computing further down the road, are also things to watch, AI’s ability to enable the faster delivery of malware makes it a serious threat to cybersecurity.

“We’ve got to also realize that by using AI as a defensive mechanism, we get this trade-off,” Taylor said. “Not exactly a negative, but a double-edged sword. There are good guys using it to defend and defeat those mechanisms. I do think AI is something that businesses around the region need to be aware of as one for potentially making it easier or more automated for attackers to plant their malware, or craft a phishing email to trick us into clicking a malicious link. But equally, on the defensive side, there are companies using AI to help better detect which emails are malicious and stop that malware getting through the system.”

“Unfortunately, AI is not just a tool for good, with the criminals able to use it as a tool to make themselves wealthier at businesses’ expense. However, here is where the cyber industry and cyber insurance play that role of helping them manage that cost when they are susceptible to some of these attacks,” he said.

AI still worth exploring, despite the dangers it presents

Much like Pandora’s box, AI’s release to the masses and its increasing adoption cannot be undone – whatever good or bad it may bring. Both experts agreed with this sentiment, with Taylor pointing out that stopping now would have terrible consequences, as threat actors will continue to use the technology as they please.

“The truth is, we can’t escape from the fact that AI has been released to the world. It’s being used today. If we’re not learning and understanding how we can use it to our advantage, I think we’re probably falling behind. Should we keep looking at it? For me, I think we have to. We cannot just hide ourselves away, as we’re in this digital age, and forget this new technology. We have to use it as best we can and learn how to use this effectively,” Taylor said.

“I know there’s some debate about the ethics around AI, but we have to realize that these models have inherent biases because of the databases that they’ve been built on. We’re all still trying to understand these biases – or hallucinations, I think they’re called – where they come from and what they do,” he said.

In her role as an incident response lead, Nicolo said that AI is incredibly helpful in spotting anomalous behaviour and attack patterns on clients’ behalf. However, she admitted that the industry’s tech is “not there yet,” and that there is still a lot of room for aggressive AI expansion to better defend global networks from cyberattacks.

“In the next few months – maybe years – I think it’s going to make sense to invest more in the technology,” Nicolo said. “There’s AI, and you have humans double checking. I don’t think it’s ever going to be in a position, at least in the near term, to set and forget; I think it’s going to become more of a supplemental tool that demands attention, rather than just walking away and forgetting it’s there. Kind of like the self-driving cars, right? We have them and we love them, but you still need to be aware.”

“So, I think it’s going to be the same thing with AI cyber tools. We can utilise them, put them in our arsenal, but we still need to do our due diligence, make sure we’re researching the tools that we have, understanding what the tools do and making sure they’re working correctly,” she said.
