The Hunton Policyholder’s Guide to Artificial Intelligence: Artificial Intelligence-Related Securities Exposures Underscore the Importance of Thorough AI Insurance Audits

As we explained in our introductory post, rapid advancements in artificial intelligence (AI) present multifaceted risks for businesses of all types. The magnitude, fluidity and specificity of these risks underscore why businesses should continually audit their own unique AI risk profiles to best understand and respond to the hazards posed by AI.

A recent securities lawsuit in the U.S. District Court for the District of New Jersey against global data engineering company Innodata, Inc. and its directors and officers underscores the potentially unique exposure facing public companies in the AI space as plaintiffs increasingly scrutinize the accuracy of AI-related disclosures, including those made in public filings. The Innodata lawsuit shows that misstatements or overstatements about the use of AI can prove as damaging as misuse of the technology itself. More to the point, Innodata places corporate management squarely among those potentially at risk from the use or misuse of AI. Companies, therefore, should evaluate their directors and officers (D&O) and similar management liability insurance programs to ensure that they are prepared to respond to this new basis for liability, and should take steps to mitigate that risk.

Businesses Are Increasingly Making Public-Facing Disclosures Related to AI

The buzz around AI has become ubiquitous. The Innodata case illustrates how companies may be enticed to tout AI in their branding, product labeling and advertising. But as with anything, statements about AI utilization must be accurate. Otherwise, misstatements can lead to a host of liabilities, including exposures for directors, officers and other corporate managers. This is nothing new, especially when it comes to public company disclosures to shareholders and the SEC.


While liability related to public-facing misstatements is not new, liability related to AI-specific misstatements is a comparatively new phenomenon, as companies increasingly make disclosures about their use of AI. One recent report noted that “over 40% of S&P 500 companies mentioned AI in their most recent annual” reports, which “continues an upward trend since 2018, when AI was mentioned only sporadically.” Companies making these disclosures included many household names and even insurance companies. Some public disclosures have focused on competitive and security risks, while others have highlighted the specific ways businesses are using AI in their day-to-day operations.

Disclosures Raise the Prospect of Liability Under the Securities Laws

These disclosures, while increasingly common, are not risk-free. As SEC Chair Gary Gensler flagged in a December 2023 speech, a key risk is that businesses may mislead their investors about their true artificial intelligence capabilities. According to Gensler, the securities laws require full, fair and truthful disclosure in this context, so his advice for businesses tempted to make misleading AI disclosures was simple: “don’t do it.”

Despite this admonition, a lawsuit filed in late February, possibly the first AI-related securities lawsuit, alleges that a company did “do it.” In a February 2024 complaint, shareholders allege that Innodata, along with several of its directors and officers, made false and misleading statements about the company’s use of artificial intelligence between May 9, 2019 and February 14, 2024. Innodata, the complaint alleges, did not have a viable AI technology and was underinvesting in AI-related research and development, which made certain statements about its use of AI false or misleading. Based on these and other allegations, the complaint asserts that the defendants violated Sections 10(b) and 20(a) of the Securities Exchange Act of 1934 and SEC Rule 10b-5.


Takeaways for Businesses Using AI

In many ways, Innodata presents just another form of management liability. That is, while AI is at the heart of the Innodata case, the gravamen of the allegations is no different from that of other securities lawsuits alleging that a company has made a misstatement about any other technology, product or process.

On the other hand, the Innodata lawsuit illustrates the need for corporate directors, officers and managers to have a clear understanding of what types of AI their companies are producing and using, both in their own operations and via mission-critical business partners. Innodata highlights why corporations cannot simply invoke “AI” as a means of enhancing their products or business without exhaustively understanding the corresponding risks and making accurate disclosures as necessary. Management and risk managers will need to continually reassess how their companies are using AI given the technology’s rapid deployment and evolution.

In sum, as companies increasingly make disclosures about AI, not only will they want to consult securities professionals to make sure that their disclosures comply with all applicable laws, but they would also be well-advised to revisit their approach to AI risk management, including through reviews of their insurance programs as necessary. By considering their insurance coverage for AI-related securities scenarios like this one early and often, public companies can mitigate their exposure before it is too late. And as always, consultation with experienced coverage counsel is an important tool in the corporate toolkit as businesses work to ensure that their risk management programs are properly tailored to their unique business, risk tolerances and goals.
