Generative AI can be helpful, but it presents risk

Insurance is all about risk mitigation. Fortunately, this means that insurers are well positioned to analyze and address how artificial intelligence (AI) is being used within the industry. But the risks and returns of adopting generative AI (Gen AI) and the possibilities of large language models (LLMs) in life insurance are more difficult to evaluate than a classic risk-versus-return calculation.

Generative AI is a particular kind of AI that is trained on existing work (the model’s training data, which may include examples of such work from the internet) to brainstorm, code, illustrate, write, and perform a wide range of other tasks. The results are impressive but unreliable. Gen AI complements, but doesn’t replace, other forms of AI, analytics, and automation.

In life insurance, Gen AI offers helpful applications across business areas: billing, claims, customer service, marketing, sales, underwriting, and more. Gen AI can drive operational efficiency, speed claims processing, improve the accuracy of risk assessments, deliver personalized marketing, aid fraud detection and prevention, and help create innovative products.

But Gen AI also presents weaknesses, threats, and risks. Internal risks (such as incomplete or inaccurate data used to train the models) and external risks (such as rogue models, which are unregulated, uncontrolled, or potentially harmful), paired with relatively high computational costs and a lack of creativity, common sense, and ethics, are among the issues that any financial institution planning to use Gen AI must consider and address.

Generative AI can certainly deliver positive outcomes: improved customer and employee experiences, operational improvements, security advancements, and faster, smarter innovation. But insurers must also take measures to prevent negative impacts from a range of potential risks: regulatory/legal, reputational, operational, security, and financial performance. As detailed in the report Generative AI: Mitigating Risk to Realize Success in Life Insurance, insurance companies must understand the risks of building generative AI capabilities while guarding against potential adverse outcomes and external threats.

Adverse outcomes

Data can contain human biases. Because Gen AI models are trained on large, pre-existing bodies of data, the resulting output can amplify the biases present in the training data and drive unethical behavior. Examples include unfair or discriminatory decisions in the underwriting process; amplification of unsubstantiated associations of particular demographic groups with higher default risk; or even theft of insurers’ extensive, sensitive private personal information. These outcomes could result in reputational harm for an insurer, along with regulatory violations. An AI ethics policy, employee training, and testing for biased output are among the steps insurers can take to guard against Gen AI bias.
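
As an illustration of what testing for biased output can look like, here is a minimal sketch in Python: it compares a model’s approval rates across demographic groups and flags disparities using the common “four-fifths” screening heuristic. The `predict` function and applicant fields are hypothetical stand-ins for a real underwriting model and its inputs.

```python
from collections import defaultdict

def approval_rates_by_group(applications, predict):
    """Compute the model's approval rate for each demographic group."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for app in applications:
        group = app["group"]  # demographic attribute, held out for testing only
        total[group] += 1
        if predict(app) == "approve":
            approved[group] += 1
    return {g: approved[g] / total[g] for g in total}

def flag_disparate_impact(rates, threshold=0.8):
    """Flag groups whose approval rate falls below four-fifths of the
    most-favored group's rate (a common disparate-impact screen)."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Stub predictor standing in for the underwriting model under test
predict = lambda app: "approve" if app["income"] > 60_000 else "deny"

applications = [
    {"group": "A", "income": 55_000}, {"group": "A", "income": 72_000},
    {"group": "B", "income": 58_000}, {"group": "B", "income": 59_000},
]
rates = approval_rates_by_group(applications, predict)
print(flag_disparate_impact(rates))  # {'B': 0.0} -> group B needs review
```

Flagged groups would then go to human reviewers, who can judge whether the disparity reflects legitimate underwriting factors or bias inherited from the training data.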

Another potentially harmful outcome of Gen AI is false output. Due to vague prompts or a lack of context, “hallucinations” may occur: instances of Gen AI models generating text that is inaccurate, misleading, or fictional while presenting it as meaningful and coherent. In insurance, incomplete or inaccurate data may lead a Gen AI model to hallucinate while producing risk assessments. If incorrect guidance is produced about claim eligibility, for example, it may result in complications including wrongful claim denials. Insurance companies should test for inaccuracies and undertake instruction tuning, a process that uses human feedback to fine-tune LLMs.
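
A minimal sketch of such accuracy testing, assuming a hypothetical `generate_answer` function standing in for the real LLM call: model answers are compared against a vetted reference set, and mismatches are collected as candidate hallucinations for human review and instruction tuning.

```python
def evaluate_against_reference(cases, generate_answer):
    """Compare model output to vetted reference answers; collect mismatches
    as candidate hallucinations for human review."""
    failures = []
    for case in cases:
        answer = generate_answer(case["prompt"]).strip().lower()
        if case["expected"].lower() not in answer:
            failures.append({"prompt": case["prompt"], "got": answer})
    return failures

# Vetted prompt/answer pairs drawn from policy documents (illustrative only)
reference_cases = [
    {"prompt": "Is accidental death covered under the base term policy?",
     "expected": "yes"},
    {"prompt": "What is the contestability period for new policies?",
     "expected": "two years"},
]

# A stub in place of the real LLM call
failures = evaluate_against_reference(reference_cases,
                                      lambda p: "Yes, it is covered.")
print(f"{len(failures)} answer(s) flagged for review")  # 1 answer(s) flagged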

External threats

Responsible utilization of Gen AI calls for life insurers to pursue innovation while protecting themselves from external threats and malicious uses of Gen AI, notably:

Nefarious actors: Gen AI may be manipulated by nefarious actors to harm insurers in a variety of ways. These include using voice and image cloning for phishing, leading to security breaches; creating sophisticated malware; creating deepfakes as realistic forgeries to spread false information or to impersonate employees; and running disinformation campaigns to spread false news and manipulate insurance markets. To counter these risks, insurers can take measures such as investing in advanced detection technologies, implementing protocols to identify and manage phishing threats, using AI-driven cybersecurity tools for advanced detection of malware and disinformation, and enforcing strict verification processes for claim approval, as sketched below.
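
As one illustration of the last of these measures, here is a minimal sketch of a verification gate that routes suspect claims to human review; the thresholds, field names, and risk signals are hypothetical, not a definitive fraud-detection design.

```python
def requires_manual_verification(claim):
    """Route a claim to human review when identity or media provenance
    signals are weak -- one layer of defense against voice/image cloning
    and deepfake-supported fraud. All thresholds are illustrative."""
    reasons = []
    if claim.get("identity_score", 0.0) < 0.9:      # e.g., document + liveness check
        reasons.append("low identity confidence")
    if not claim.get("media_provenance_verified"):  # e.g., content-provenance metadata check
        reasons.append("unverified supporting media")
    if claim.get("amount", 0) > 100_000:            # high-value claims always reviewed
        reasons.append("high claim amount")
    return reasons

claim = {"identity_score": 0.72,
         "media_provenance_verified": False,
         "amount": 250_000}
print(requires_manual_verification(claim))
# ['low identity confidence', 'unverified supporting media', 'high claim amount']
```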

Regulatory violations: Globally, the speed and specificity of regulatory responses to AI overall, and to Gen AI specifically, vary. In the US alone, as of this writing, 25 states have introduced legislation to cover AI and Gen AI; in July, the National Association of Insurance Commissioners (NAIC) issued a bulletin about insurers’ use of AI. Rapid as these developments are, insurers must stay aware of and in compliance with Gen AI regulations at the national and international levels in order to guard against regulatory violations. Currently, the most significant regulatory threat that Gen AI presents is breaching existing regulations, particularly data privacy regulations. While public LLMs are built with safeguards against the ingestion of personally identifiable information (PII), internal Gen AI models may not have these controls. If PII is used in training materials without being masked, for example, the firm using it may be exposed to significant financial penalties, such as fines of up to 4% of a firm’s annual global revenue for GDPR violations. Countermeasures against the impact of potential regulatory violations include masking PII in materials used to train LLMs and implementing controls on Gen AI models, along with employee training and awareness regarding AI regulation and LLMs’ potential impact on privacy.
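
As an illustration of the first countermeasure, here is a minimal regex-based masking sketch. Real systems would typically rely on dedicated PII-detection tooling; regexes alone miss names and other free-form identifiers, as the example output shows.

```python
import re

# Illustrative patterns only; production systems typically use dedicated
# PII-detection tooling rather than hand-rolled regexes.
PII_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text):
    """Replace matched PII with type tokens before text enters a training set."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Policyholder John Doe, SSN 123-45-6789, reachable at jdoe@example.com."
print(mask_pii(record))
# Policyholder John Doe, SSN [SSN], reachable at [EMAIL].
# Note: the name is untouched -- regexes alone miss free-form identifiers.
```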

Intellectual property (IP) violations: If an insurance company uses Gen AI models to generate content that infringes on IP, the insurer may be subject to legal consequences (e.g., lawsuits brought by copyright holders), financial impacts (e.g., legal fees and damages from lawsuits), and reputational damage. Scenarios for insurers include a Gen AI model training on unlicensed or copyrighted work; leaking trade secrets if proper safeguards aren’t in place; and creating unauthorized derivative works, due in part to different countries’ varying legal interpretations of fair use. To counter the IP risks presented by LLMs, insurers can implement rigorous processes to ensure that all data used in training models is properly licensed, develop AI systems that recognize and respect IP rights, and include clauses in insurance policies that note the IP risks associated with the use of AI technologies.
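
A minimal sketch of the first of these processes, assuming candidate training documents carry recorded license metadata; the allowlist itself is a legal and compliance decision, not a technical one, and the license labels here are hypothetical.

```python
# Licenses deemed acceptable for model training (illustrative list only;
# the real allowlist is set by legal/compliance, not engineering).
APPROVED_LICENSES = {"public-domain", "cc0", "internally-owned", "vendor-licensed"}

def partition_by_license(documents):
    """Admit only documents with an approved, recorded license;
    everything else is held out for legal review."""
    admitted, held = [], []
    for doc in documents:
        if doc.get("license") in APPROVED_LICENSES:
            admitted.append(doc)
        else:
            held.append(doc)
    return admitted, held

corpus = [
    {"id": "claims-manual-v3", "license": "internally-owned"},
    {"id": "scraped-article-18", "license": None},
]
admitted, held = partition_by_license(corpus)
print([d["id"] for d in admitted])  # ['claims-manual-v3']
print([d["id"] for d in held])      # ['scraped-article-18'] -> legal review
```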

Today’s efforts, tomorrow’s returns

Making the most of the potential transformational impact of Gen AI requires extensive collaboration, internally and with service providers and implementation partners. Insurance industry leaders must proactively develop risk mitigation strategies in order to optimize the use of Gen AI for insurers and their customers alike.