Four things you need to know about ChatGPT

Introducing a new wave of hackers

With ChatGPT being so new to the market, it is imperative to pinpoint exactly what types of threats can impact a business in order to safeguard its digital assets and information. Farley has noticed that, since ChatGPT made its way into the mainstream, there have been “dark web forums popping up that are detailing how the technology can be used by junior hackers to learn how to develop malware and phishing email campaigns and deploy them.”

“This could potentially grow the number of novel attackers into a large army in a short amount of time,” Farley said. It will allow individuals without a sophisticated background in hacking to become adept at creating rogue software capable of wreaking havoc on businesses, presenting insurers with claims that could otherwise have been avoided or lessened in impact.

“As hackers exploit the technology or regulators clamp down on the various usages of it, companies will need to be more proactive in staying ahead of the curve with protective measures.”

Dealing with data hygiene

When using AI-enabled chatbot services, a business must be aware of the kinds of information that are fed into them and whether that material is classified. From a risk management standpoint, data hygiene must be the top priority to “mitigate any chance of intellectual property or trade secrets being pushed out into the public,” Farley said.

To sidestep this potentially disastrous outcome, “very thoughtful processes need to be deployed about what data should go in,” Farley said. Technology is only as good as the data put into it, so the information should be thoroughly vetted, and responsibility for vetting it should be entrusted to a select few. Doing so would also thwart instances of misinformation being spread, which can damage a company’s reputation.
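As a purely illustrative sketch of what such a process might look like in practice, the snippet below screens a draft prompt against a hypothetical list of sensitive patterns before anything is sent to a chatbot. The pattern list, function name and blocking behaviour are assumptions made for the example, not a prescription from Farley or any insurer.

```python
import re

# Hypothetical patterns a company might treat as sensitive; in practice the
# list would be defined by the legal, IT and risk teams, not hard-coded here.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential_marker": re.compile(
        r"\b(confidential|trade secret|internal only)\b", re.IGNORECASE
    ),
}


def screen_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in a draft prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]


if __name__ == "__main__":
    draft = "Summarise our confidential pricing model for the Q3 launch."
    flags = screen_prompt(draft)
    if flags:
        print(f"Blocked before submission: prompt matched {', '.join(flags)}")
    else:
        print("Prompt cleared for submission")
```

A screening step like this is only one layer; it illustrates the principle of vetting data before it leaves the organisation rather than a complete control.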

A company should also be concerned with data bias, and should bring its legal, IT and marketing teams together to prevent it from creeping in.

In tandem with internal AI concerns, a plan should also be put in place to address external regulators at the state, federal and international levels for compliance purposes.

“When dealing with business in cyberspace, at some point you will have a regulator who is going to have something to say about the controls surrounding ChatGPT,” Farley said. Since the service is in its infancy, these parameters have not yet been established, which leaves a lot of variables and vague decision-making in the meantime.

Risk managers can write specific AI usage policies that formalise this highly selective task and ensure that classified data does not get into the wrong hands.

ChatGPT is coming for my firms marketing department. Some pretty solid #insurance and #riskmanagement advice pic.twitter.com/ZbIm80usY3

— Bradley Dlatt (@bdlatt) March 25, 2023

Leveraging cyber insurance

Organisations and brokers should be mindful of rapidly changing cyber insurance products that could impact the scope of coverage. In 2023 the market is in constant flux, spurring insurers to adopt new methods of mitigating cascading losses from regulatory risk, especially around the use of novel technologies.

Sub-limits and coinsurance are imposed for certain types of cyber losses, while some carriers have even modified policy language to restrict or exclude coverage for certain incidents that may result in litigation, regulatory investigations and fines.
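As a purely hypothetical illustration of how those terms interact: under a policy carrying a US$1 million ransomware sub-limit with 20% coinsurance, a US$3 million ransomware loss would first be capped at the US$1 million sub-limit, of which the insured would retain 20% (US$200,000), leaving the insurer to pay US$800,000. These figures are invented for illustration; actual terms vary by carrier and policy.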

“Artificial intelligence will significantly deepen our digital footprint, while potentially raising a cyber risk profile almost in lockstep,” Farley said.

“It is the industry’s responsibility to stay on top of any developments and implement or modify policies as a result. Things are going to change, but it’s beneficial to adopt some cautious optimism about how ChatGPT will revolutionise things.”