Regulator looking into the ‘black box’ of auto rating models


Auto insurance rating models are becoming so complex that Ontario’s regulator is looking not only at the inputs of these models, but now the outputs as well, attendees heard at an industry event last week.

“There are concerns that these systems will be so complex over time that we don’t understand them,” said Carole Piovesan, co-founder of data law and consulting firms INQ Law and INQ Consulting. “And we’re putting in place certain mechanisms to try to avoid that, including a whole new market around creating AI systems to assess AI systems, to explain those systems.”

The complexity of these rating models, exacerbated by artificial intelligence (AI), has caught the attention of the Financial Services Regulatory Authority of Ontario (FSRA), which is now looking to examine outputs as well as inputs.

Traditionally, regulators looked at the inputs — what could and could not be used to set a rate, Brian Sullivan, editor and owner at Risk Information Inc., said during the 2023 FSRA Exchange event on Jan. 19.

And if certain rating factors could not be used, Sullivan explained, an insurer’s hypothetical response could be: “‘Well, I can just bring this big pile of data over here and accomplish the same thing.’” Added Sullivan: “Should we ignore the inputs and instead spend most of our regulatory time examining the outcomes, the outputs, of those systems?”

Tim Bzowey, FSRA’s executive vice president of auto/insurance products, acknowledged that, as a rate regulator, FSRA has been very focused on rating inputs: “effectively, what goes in the soup and maybe not so much how it tastes.”



Although the regulator meets the statutory standard of rates that are just, reasonable and not excessive, that’s different from focusing on consumer outcomes, Bzowey said. “If I’m interested in consumer outcomes, I think by definition I have to be much less interested in rating input,” Bzowey said. “So, I don’t think I would say we would go so far as to ignore them…. But I think it’s also fair to say that our current regime, in the name of fairness, makes an effort to restrict inputs.

“If principles-based regulation to FSRA is about consumer outcomes, then necessarily the work we do in my shop needs to be a lot more about that and a lot less about checking the math of the actuarial professional submitting the filing,” Bzowey added. “We’re moving in that direction. We’ve taken a lot of steps in that direction. But any reform of the rate regulation framework is going to require much, much more be done without a doubt.”

One panellist said more complex, AI-based ‘black box’ models are not necessarily bad. “Data and algorithms are agnostic; they’re not good or bad,” said Roosevelt C. Mosley Jr., principal and consulting actuary with Pinnacle Actuarial Resources and president of the Casualty Actuarial Society. “They simply analyze the information that’s given to them and produce an output. There are biases, potentially, in the process that can be bad. It can contribute to bad things.”

The possibility of regulating outputs is an important conversation to have, Mosley said. “It could potentially move the industry forward…but it’s going to require us to think about it differently. And to also be able to give people the comfort that we’re not just letting things run amok. We’re actually putting some things in on the back end to protect consumers to make sure that nothing’s going wrong.”
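To make the idea of back-end output checks concrete, here is a minimal sketch (not drawn from the panel, and purely illustrative) of how a comparison of a rating model’s quoted premiums across customer segments might look, independent of whatever inputs the model used. The column names, segments and 10% threshold are assumptions invented for the example.

```python
# Illustrative sketch only: an output-level check on a rating model's premiums.
# Assumes a DataFrame with hypothetical columns "segment" and "quoted_premium".
import pandas as pd

def output_disparity_check(quotes: pd.DataFrame, max_ratio: float = 1.10) -> bool:
    """Pass the portfolio only if no segment's mean quoted premium
    exceeds the overall mean by more than `max_ratio` (e.g., 10%)."""
    overall_mean = quotes["quoted_premium"].mean()
    segment_means = quotes.groupby("segment")["quoted_premium"].mean()
    worst_ratio = (segment_means / overall_mean).max()
    return worst_ratio <= max_ratio  # True means the check passes

# Example usage with made-up data:
quotes = pd.DataFrame({
    "segment": ["urban", "urban", "rural", "rural"],
    "quoted_premium": [1450.0, 1520.0, 1300.0, 1280.0],
})
print(output_disparity_check(quotes))
```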

See also  Hyundai Ioniq 6 wagon or N model not out of the question

Piovesan added that “when you crunch all of that data through a massive, complex computing system and you have this output, you may trust the output, but you don’t understand how you got to that output.”

The new world of AI is raising the profile of principles such as fairness, non-discrimination, transparency and accountability, Piovesan said. And different jurisdictions are looking at schemes to prevent models from becoming so complex that they’re not understandable anymore. For example, Canada is proposing to set up an AI and data commissioner that will have the competency to address these systems.

“Today, we find ourselves in an era in which we are not only regulating a sector like insurance, but [also the] technology that can be used within a sector like artificial intelligence,” Piovesan said. “We are also requiring that these systems provide an explanation as to the output.”
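As one illustration of the “AI systems to assess AI systems” tooling Piovesan alludes to (nothing here was presented at the event), a model-agnostic technique such as permutation importance can at least indicate which inputs most influence a black-box model’s outputs. The model, feature names and data below are synthetic and assumed purely for the sketch.

```python
# Illustrative sketch: probing a black-box rating model's outputs with
# permutation importance (a model-agnostic explanation technique). Data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Hypothetical rating features: driver age, vehicle age, annual kilometres driven.
X = rng.uniform([18, 0, 2_000], [80, 20, 40_000], size=(1_000, 3))
# Synthetic premiums driven mostly by driver age, plus mileage and noise.
premium = 800 + 12 * (80 - X[:, 0]) + 0.01 * X[:, 2] + rng.normal(0, 50, 1_000)

model = GradientBoostingRegressor().fit(X, premium)  # stand-in "black box"
result = permutation_importance(model, X, premium, n_repeats=10, random_state=0)

for name, score in zip(["driver_age", "vehicle_age", "annual_km"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```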
