AI regulation faces test on life insurance in Colorado

(Bloomberg) -- The life insurance industry could become one of the first sectors subject to strict rules on the use of algorithms and models powered by artificial intelligence as regulators seek to protect consumers from discrimination and bias.

The Colorado Division of Insurance proposed new rules last month that would require state-licensed life insurers to provide an inventory of all AI models they use, establish strict governing principles on how they deploy algorithms, and submit to significant oversight and reporting demands. 

“Our core goal is to make sure people aren’t being discriminated against,” said Colorado Insurance Commissioner Michael Conway. “To get there, we need to help insurers build out their own muscle memory of how they’re going to use big data in general moving forward.”

The meteoric rise of generative AI tools like OpenAI’s ChatGPT has lit a fire under regulators already struggling to draft rules around how companies can use big data in everything from employee hiring to rental applications. There is momentum in Europe and the US to address algorithmic discrimination and develop safeguards around AI tools at the federal level. But experts say the most granular rules for AI — and the ones that will have the most significant effects on companies — will emerge at the state level, industry by industry.

“This is the first comprehensive regulatory rule for AI governance, not just for insurance, but I think in general,” said Avi Gesser, co-chair of the data strategy and security program at the law firm Debevoise & Plimpton. “ChatGPT caused some regulators to move more quickly on AI issues and ask who has been thoughtful about this. Looking around, it wouldn’t surprise me if they looked to Colorado.”

Life insurance companies have utilized automated tools in their underwriting programs for years. During the Covid-19 pandemic, when customers were largely unable to get in-person medical exams, insurers became interested in how they could use customer data — from credit card transactions to court records — to price and sell policies.

By 2021, 91% of life insurers had fully or partially automated underwriting programs in place, up from 62% in 2019, said Catherine Theroux, a spokesperson for Limra, an insurance industry research firm. Life insurers surveyed by McKinsey & Co. in 2020 saw a 14% median increase in sales volume within two years of digitizing certain aspects of their services, including underwriting.

The Colorado regulation, which is scheduled to be adopted later this year, was mandated by a 2021 state law to protect consumers from unfair discrimination in insurance policies based on race, religion, gender and other social categories. In public stakeholder meetings on the regulation, insurance companies have pushed back against a policy they describe as overly burdensome, and one that will bring little benefit to those who need or want more coverage.

More than 100 million American adults acknowledge they have a life insurance coverage gap, according to Limra, but individual policy sales are waning from pandemic highs as soaring inflation eats into household budgets.

“The current proposal would be a drag on the use of technologies that would otherwise provide people the opportunity to access coverage,” Brian Bayerle, senior actuary at the American Council of Life Insurers, an industry lobbying group, wrote in an email. “This will likely cool the ability of insurers to innovate on their own or with the contribution of third parties.”

Those who study the insurance industry say the regulation is an important step toward accountability for private companies with access to highly sensitive data, including online health records.

“Especially after Enron, determining who is accountable when something goes wrong — getting someone to put down their name to certify models or outputs — becomes critical,” said Sophia Duffy, a professor of business planning at the American College of Financial Services in Pennsylvania who has studied AI-enabled underwriting in life insurance.

By law, insurance companies can’t use data on race, gender, ethnicity and other protected characteristics to sell policies. But regulators are concerned that algorithms — trained on vast troves of electronic customer data — may learn from variables that act as proxies for those social groups. For example, credit card data showing that a policy applicant buys cigarettes every day is valuable information a company can use to price a policy. But the location where those cigarettes were purchased — urban or rural, for example — can in effect stand in for race or ethnicity when fed into an algorithm, potentially resulting in unfair bias against that applicant.

Experts say the Colorado regulation leaves an important gap by not defining what exactly constitutes discrimination. That is difficult to determine when insurance companies don’t collect data on the race, ethnicity or gender of their policy applicants.

“The most difficult challenge is around standard setting for unfair discrimination,” said Azish Filabi, a professor of business ethics at the American College of Financial Services who has co-authored papers with Duffy on the role of AI in insurance underwriting. “You might have a good testing process [for bias], but what are you testing for?”

Machine learning programs are only as good as the data fed into them, and as more customer data is harvested from a wider variety of sources, experts say developing ethical standards for how that data is collected and used is paramount.

“Insurance companies have access to so much more data than in the past,” Filabi said. “This is moving so quickly.”

To contact the author of this story:
Lucy Papachristou in New York at lpapachristo@bloomberg.net