Technology

State regulators push insurers to be responsible when using AI

An NAIC bulletin sets expectations but avoids prescriptions

By Mark Schoeff Jr.

As more insurers turn to artificial intelligence to crunch data, process claims and determine whom to cover, state regulators are pushing them to avoid harming customers.

Regulators are concerned about the accuracy of the results produced by AI systems and predictive analytics, and about whether those tools introduce unfair discrimination. For instance, an AI system may wrongly deny medical claims or flag property and casualty fraud risks in a racially biased way.

Insurance itself is based on assessing risk. But using AI to aid that process presents its own risks, said Kathleen Birrane, Maryland insurance commissioner.

“When you have very, very, very complex models that are finding correlations that seem to be predictive of risk and loss, are they also predictive of something else and, therefore, do you wind up…disadvantaging a whole category of people, particularly people in protected categories, because there is an unexplored, unexplained, unknown, unrealized correlation behind the scenes?” said Birrane, chair of the Innovation, Cybersecurity and Technology Committee of the National Association of Insurance Commissioners (NAIC).

AI is “unique as a methodology because it introduces a level of opaqueness that doesn’t exist with other methods,” Birrane added.

The NAIC model bulletin tells insurers what regulators expect in the way of internal controls for governance and risk management, whether firms build their own AI systems or rely on third-party AI.


“The general proposition is that you are responsible for the outcome,” Birrane said. “We want to make sure that you are vigilant about that. The objective is to mitigate the risk that the [AI] deployment will result in those adverse outcomes.”

An urgency for AI guardrails

Only one state – Alaska – has issued the NAIC AI bulletin, while nine other states have put it out for public comment. Birrane anticipates more activity soon.

“We’ll see an uptick in the spring after [state] legislative sessions slow down,” she said.

Given that insurers – including Cigna and UnitedHealth – are facing AI-related litigation, firms shouldn’t wait for their states to issue the AI bulletin before putting internal controls in place, said Dan Adamson, co-founder of Armilla AI, an AI risk consulting firm.

“All carriers should take this very seriously immediately,” Adamson said. “We’re already seeing lawsuits on the irresponsible use of AI. [The NAIC bulletin] is sending a strong signal to insurers, whether they’re building AI solutions in-house or using a vendor’s solution, they have to do it in a responsible way.”

The NAIC bulletin doesn’t prescribe specific AI guardrails. Even when a state issues it, the bulletin does not carry the force of law; rather, it explains how regulators interpret existing laws as they apply to AI.

Many insurance firms, particularly large ones, already have AI protocols in place because they’re concerned about satisfying their boards and avoiding litigation, Birrane said.


“We have a pretty good sense that these are all actionable, doable guidelines…and that many companies are already here or beyond,” Birrane said.

Industry needs clear regulatory expectations

AI is “revolutionizing the insurance landscape” in underwriting, risk assessment, claims processing and customer service, said Emma Ye, vice president of risk at At Bay, a cyber insurance specialist. Knowing regulators’ mindset, she said, is valuable.

“We as an industry need clear expectations for regulatory compliance, so everyone will follow the same standard, and, hopefully, it’s a high standard,” Ye said. “We understand the critical importance of ensuring that AI-driven decisions are transparent, accurate and free from discriminatory practices.”

Sezaneh Seymour, vice president and head of regulatory risk and policy at Coalition Inc., a cybersecurity insurance specialist and risk consultant, said the bulletin crystallizes state regulators’ expectation that AI programs must have governance and risk-management controls.

“It lays out those expectations clearly, and that is very helpful,” she said.

She also likes the bulletin’s non-prescriptive nature, noting that many laws and regulations governing responsible business conduct are already in place and apply equally to AI. For instance, discrimination using AI is still discrimination.

“There’s no need to re-create the wheel,” Seymour said. “We have a number of measures in place to safeguard the use of AI.”

Humans make key decisions

One practice her firm follows is to allow AI to slice and dice data but not make the final assessment about coverage.

“All actual decisions we leave to humans,” Seymour said.

The humans who write insurance policies and determine whose claims get paid will now draw upon AI, as well as traditional questionnaires and third-party data sources, Ye said.


“The question is going to be how do you trust these different sources to get an accurate view of risk,” Ye said.

With state regulators making AI a priority, insurers know the regulators will be watching.
