Cybercriminal abuse of generative AI on the rise: report

Insurance Business Australia

Use of GenAI in cybercrime “developing at a blazing pace”


By Noel Sales Barcelona

Trend Micro has sounded the alarm over the rising pace – and breadth – of the abuse of generative artificial intelligence (GenAI) in cybercrime globally.

In its updated report, released on July 30, Trend Micro said that new developments had been recorded only a few weeks after its first report on the criminal use of GenAI, and that the abuse of GenAI in cybercrime is “developing at a blazing pace.”

“We keep seeing a constant flow of criminal LLM offerings for both pre-existing and brand-new ones. Telegram criminal marketplaces are advertising new ChatGPT look-alike chatbots that promise to give unfiltered responses to all sorts of malicious questions. Even though it is often not mentioned explicitly, we believe many of them to be ‘jailbreak-as-a-service’ frontends,” said David Sancho and Vincenzo Ciancaglini, who wrote the report.

The report noted that commercial large language models (LLMs) such as Gemini and ChatGPT are programmed to refuse questions or requests perceived as “malicious” or “unethical.” It also pointed to several “criminal” ChatGPT-like LLM offerings now proliferating on the internet with no such restrictions on criminal use.

“Criminals offer chatbots with guaranteed privacy and anonymity. These bots are also specifically trained on malicious data. This includes malicious source code, methods, techniques, and other criminal strategies,” the researchers said. “The need for such capabilities stems from the fact that commercial LLMs are predisposed to refuse obeying a request if it is deemed malicious. On top of that, criminals are generally wary of directly accessing services like ChatGPT for fear of being tracked and exposed.”


The researchers also found that older criminal LLMs believed to be long gone, such as WormGPT and DarkBERT, have recently resurfaced.

WormGPT, which started as a Portuguese programmer’s project and was announced as discontinued in August 2023, is now being sold in cybercriminal shops in several versions. Meanwhile, DarkBERT, a “criminal” chatbot said to be part of a larger offering by a threat actor using the username CanadianKingpin12, is being sold underground alongside four other criminal LLMs with very similar capabilities, the report noted.

“What’s particularly interesting for the recent versions of DarkBERT and WormGPT is that both are currently offered as apps with an option for a ‘Siri kit.’ This, we assume, is a voice-enabled option, similar to how Humane pins or Rabbit R1 devices were designed to interact with AI bots. This also coincides with announcements at the 2024 Apple Worldwide Developers Conference (WWDC) for the same feature in an upcoming iPhone update. If confirmed, this would be the first time that these criminals surpassed actual companies in innovation,” the researchers said.

The report also revealed that new criminal LLMs such as DarkGemini and TorGPT are proliferating on the internet, though their functionality does not differ much from the rest of the criminal LLM offerings, apart from their ability to process pictures.

The researchers also warned that deepfake scams are increasingly targeting ordinary citizens as the technology becomes cheaper and more accessible.
