Harnessing the power of predictive modeling in underwriting

Narrator: Welcome to IB Talk, the leading podcast for the insurance industry across the United States, brought to you by Insurance Business.

Jia: Hello and welcome to the latest episode of IB Talk. I’m Jia Snape, news editor at Insurance Business. Today we’re talking about an exciting innovation that is changing the game in insurance. Predictive modeling allows carriers to analyze data in real time and underwriters to make more accurate decisions using techniques such as data mining, artificial intelligence and machine learning. But like any piece of technology, there are ethical considerations that come with leveraging predictive modeling in underwriting. I’m pleased to be joined by two experts who can help us shed more light on this topic. First, we have Christine Byun, director of product at Verikai. Christine has over 15 years’ experience launching and scaling enterprise products. We also have Justen Nestico, director of solutions consulting at Verikai. Justen is an actuarial and data science consultant who is passionate about transforming health care using AI. He’s also a member of the Society of Actuaries. Welcome to IB Talk.

Christine: Hi, it’s great to be here.

Justen: Thank you for having us.

Jia: Great to have you both. So first off, how has predictive modeling changed the underwriting process, and what benefits does it offer to insurance companies and policyholders alike?

Christine: Yeah, this question is top of mind for our product team, not just how things have already changed, but we’re obviously always thinking about how we can continue to provide more tools and improve the accuracy of risk assessment going forward. We’ve seen predictive modeling being adopted more and more, especially in the last few years. Teams are starting to adopt one or even multiple tools to support their processes. A few years ago, our conversations started with a basic overview of how predictive modeling works, sort of trying to explain that it could augment and improve processes beyond the traditional way of doing things. But now we spend a lot less time on that, and most folks already have a baseline understanding of the benefits you can see. So I’d love to chat a little bit more about that today.

One outcome of these models is to identify high risk groups or individuals. Surfacing this information can help underwriting teams flag those high risk groups and price them appropriately. It also helps them ultimately improve loss ratios and avoid unexpected claims. On the other hand, models can also identify the low risk groups or individuals, and that supports a different sort of process, where teams can use the insight to fast track a particular RFP. So imagine setting preferential pricing and locking in those customers with the best risk. And because the models provide this information in real time, you can turn around very competitive quotes very quickly, and that can translate to increased conversion rates and new business, which is especially great because that new business is very low risk.

Underneath all of this is the benefit of improved efficiency as a whole. Traditional underwriting, I think we all know, can be pretty cumbersome depending on the tools available to the teams. It can involve pulling data, searching through it manually, finding different patterns and trying to make connections between all that information. Ultimately it’s a lot of time and effort collating data and spending hours and hours per risk. But with a predictive model, you can have that immediate insight. It’s the thing that pulls together multiple data sources for those teams. It can also expose information about what factors went into that risk evaluation, which helps underwriting teams understand why a group or individual might be high or low risk. For example, something Verikai shows is information like what medical conditions are likely present or what high risk behaviors to watch out for. I like to think that this sort of information gives an underwriter superpowers, and it helps them spend time on what they do best rather than on all of the data collection, collation and cleansing.

On the other question of what benefits modeling can have for policyholders, it really helps carriers quote more accurate premiums. Rather than the broad-stroke, demographic-based pricing, which can be inherently biased or just very inaccurate in representing an individual’s risk, it can provide a much more comprehensive and accurate view. I’ll give one example we found for a medical stop loss customer. This was a woman of childbearing age, and the traditional manual rating put her into that bucket. So she got a rate of over $300 a month just because she’s in that category, which assumes a high likelihood of pregnancy, and that ends up being one of the highest costs.
Our model, however, had a lot more information about her: the fact that she had recently purchased a two-door car, she was traveling very frequently, she had recently changed jobs. Our model incorporated a lot of little variables into the prediction of a lower likelihood of pregnancy over the next 12 months. So that actually put her into a much lower predicted claims cost of a little over $40 a month. So yeah, we’ve seen benefits across the board for carriers. We’ve seen improved efficiency, better loss ratios, increased new business. And then for policyholders, I’d say a more fair and appropriate individualized premium.
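
To make Christine’s example concrete, here is a minimal, hypothetical sketch of how a model might fold behavioral signals into an individualized predicted claims cost instead of a flat demographic bucket rate. The feature names, weights and dollar figures are illustrative assumptions, not Verikai’s actual model.

```python
# Hypothetical sketch only: feature names, weights and dollar amounts are
# made up for illustration and are not Verikai's model.

DEMOGRAPHIC_BUCKET_RATE = 300.0  # flat rate for the demographic bucket, $/month

def individualized_monthly_cost(features: dict) -> float:
    """Toy scoring function: adjust a prior likelihood of a costly event
    (pregnancy in the next 12 months) using behavioral signals, then turn
    that likelihood into an expected monthly claims cost."""
    pregnancy_likelihood = 0.35                # prior implied by the demographic bucket
    if features.get("recently_bought_two_door_car"):
        pregnancy_likelihood -= 0.10
    if features.get("travels_frequently"):
        pregnancy_likelihood -= 0.10
    if features.get("recently_changed_jobs"):
        pregnancy_likelihood -= 0.05
    pregnancy_likelihood = max(pregnancy_likelihood, 0.02)
    base_cost = 20.0                           # other expected costs, $/month
    return base_cost + 250.0 * pregnancy_likelihood

member = {
    "recently_bought_two_door_car": True,
    "travels_frequently": True,
    "recently_changed_jobs": True,
}
print(f"Bucket rate:         ${DEMOGRAPHIC_BUCKET_RATE:.2f}/month")
print(f"Individualized rate: ${individualized_monthly_cost(member):.2f}/month")
```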

Jia: Interesting. And Justen, what’s your perspective on this?

Justen: Yeah. So I agree with everything that Christine said. The only thing I’ll add is that there’s one use case I’m really excited about, at least for best-in-class carriers. What we’re seeing now is carriers going beyond just using analytics and AI to better price risk, and taking the next step of really trying to manage the risk that’s already on their books. An example of what I’m talking about is in the wake of COVID, when there were lockdowns and all nonessential services at hospitals were canceled, we saw the rates of screening for just about every type of cancer fall through the floor. And as a result, the number of people diagnosed with cancer fell through the floor. But it’s not like those cases of cancer disappeared altogether. They just weren’t being diagnosed because people weren’t going to the doctor. What we saw, at least with some carriers, is that as things began to open up again, they wanted to identify which individuals on their book were at highest risk of developing those cancers, and then use clinical staff to reach out to them and encourage them to go get their screenings. The question when you do something like that is how do you triage among all the people on the carrier’s books? For carriers, the ability to use analytics and AI to specifically target the individuals at highest risk, and so better target the efforts of their clinical staff, was really, really beneficial. This is a use case that I expect to get more and more popular over time, especially as the number of these really high cost drugs continues to come through the FDA pipeline and enter the market. For members of an insurance plan, or patients, the benefit is really being diagnosed earlier. For some of these more severe conditions, being diagnosed earlier means lower cost and a higher rate of survival. And of course, our payer customers are always interested in ways to reduce cost as well. So to the extent that they can use AI to help individuals get diagnosed earlier and use simpler or cheaper drugs or cheaper treatments, it’s kind of a win-win situation between the member and the carrier.
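
As an illustration of the triage step Justen describes, here is a minimal sketch, assuming a risk score has already been produced by some upstream model; the member IDs, scores and capacity figure are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Member:
    member_id: str
    cancer_risk_score: float  # 0.0-1.0, produced by an upstream model

def build_outreach_list(members: list[Member], capacity: int) -> list[Member]:
    """Rank the book by predicted risk and keep only as many members as the
    clinical team can actually contact -- that's the triage problem."""
    ranked = sorted(members, key=lambda m: m.cancer_risk_score, reverse=True)
    return ranked[:capacity]

book = [Member("A-001", 0.82), Member("A-002", 0.15), Member("A-003", 0.64)]
for m in build_outreach_list(book, capacity=2):
    print(f"Reach out to {m.member_id} (risk score {m.cancer_risk_score:.2f})")
```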

Jia: Data is really important, really key, to getting all these benefits from predictive modeling. What data sources do insurers typically use for predictive modeling, and how do they ensure that the data is accurate, relevant and unbiased?

Christine: That’s a great question. I mean, we’ve seen insurers use a variety of data sources. Everyone looks at things like historical data, key policyholder information, demographics, and claims information when that’s available. More and more carriers are also looking to bring in all sorts of new data sources to help with that risk assessment process, and in the last several years the amount of data available has increased a ton, so there are a lot of options to choose from. Among the data feeds we’re looking at, medical data, including prescriptions, and behavioral data are also being incorporated. That’s actually where Verikai has a very unique set of information. Without giving away our secret sauce, I’ll describe some of the categories we use: things like purchase information, point of sale information, financial or credit data, life events like births and marriages, social media, and online behavior. So you can see that there’s a ton of information out there that has the potential to be useful in the underwriting process.

There’s a lot out there, and it’s great that there are so many choices, but that’s obviously leading to some challenges. First of all, how do you even know which sources are actually going to be relevant or valuable for the risk assessment in the line of business you’re responsible for? I think this question is really important, because a ton of time, energy and resources needs to go into researching and evaluating these sources. There’s also a ton of variance in the accuracy and consistency of data between all of these sources. We’ve seen some of these sources require a ton of post-processing and transformation before they can even be pulled into our database for our models to use. What we’ve seen is that data ingestion and cleansing, all of those processes, are extremely time consuming. Different sources have different methods of retrieval and different formats, and cleansing the data requires removing duplicates and identifying and correcting errors. Only after all of that’s done can you pull the data into your data warehouse or your data lake. And then before modeling, you need to remove specific fields such as protected classes. I’ll also mention data compliance: on top of all of this, you need robust data security and privacy policies. So there are a lot of things that folks who are currently trying to operationalize data have probably run into. I imagine everyone’s seen at least some of these challenges.

I think another interesting aspect to consider is not just the data itself, but making that data actionable. More data is great, but data just for data’s sake can actually slow things down. That’s where I think a technology vendor can really help. They can remove almost all of those challenges from the picture. For example, we spend all our time on these problems. We are constantly looking at new data sources, especially the ones that insurers may not typically have used before. We analyze them for relevancy to our models, and then we continually add the most impactful data to our database. And then we’re constantly training our models to improve those risk scores and improve the insights that we’re providing.
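
As a rough illustration of the cleansing steps Christine lists (deduplication, correcting errors, and removing protected fields before modeling), here is a minimal pandas sketch; the column names and protected-field list are assumptions for the example, not any carrier’s actual schema.

```python
import pandas as pd

PROTECTED_FIELDS = ["race", "gender", "religion"]  # stripped out before modeling

def cleanse_for_modeling(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.drop_duplicates()                            # remove duplicate records
    df = df.dropna(subset=["member_id"])                  # drop rows missing the join key
    df = df.assign(zip_code=df["zip_code"].str.zfill(5))  # correct a common format error
    return df.drop(columns=[c for c in PROTECTED_FIELDS if c in df.columns])

raw = pd.DataFrame({
    "member_id": ["001", "001", "002", None],
    "zip_code":  ["2139", "2139", "94105", "60601"],
    "race":      ["A", "A", "B", "C"],
    "gender":    ["F", "F", "M", "F"],
})
print(cleanse_for_modeling(raw))
```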

Jia: That’s great advice. And Justen, what’s your take on, you know, getting the right data sources for these risk assessments?

Justen: Well, Christine raised a lot of good points. The only thing I’ll add is that everything she mentioned in terms of checking the data and processing it and removing the protected classes, it’s not a one time thing. That’s something that needs to be happening continuously. You want to make sure that as people’s behaviors change over time, both at the individual level as well as the societal level, it’s not causing new bias to be introduced that wasn’t there originally, or for the models to become more biased over time as there’s a sort of drift in people’s behaviors. The fact that it’s something that needs to be done continuously, I think, is one of the big benefits of looking to a third party vendor to do it, at least for carriers that don’t necessarily have the staff, or want to devote the staff, to specifically handle all of those tasks associated with making sure the models and the data are fair and unbiased.
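
One way to make that "continuously" concrete is to monitor how the distribution of model scores drifts between a baseline window and the current book. The sketch below uses a population stability index with a common rule-of-thumb threshold of 0.2; the synthetic score distributions and the threshold are assumptions for illustration.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of model scores."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # catch scores outside the baseline range
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)         # avoid log(0)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 10_000)       # scores at model launch (synthetic)
current = rng.beta(2.5, 4.5, 10_000)    # scores this quarter, slightly shifted (synthetic)
psi = population_stability_index(baseline, current)
print(f"PSI = {psi:.3f} -> {'investigate drift' if psi > 0.2 else 'stable'}")
```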

Jia: Great. And just as a last piece, you know, there are really some ethical considerations an insurer should be keeping in mind. What are some of these and how can insurers, you know, make sure that their algorithms don’t perpetuate or reinforce discriminatory practices?

Justen: That’s a good question. The most interesting thing to me is not the data or the models or the algorithms. It’s still the role of human judgment in this entire process, because without getting that right, you’re inevitably going to have bias, or you’re going to have privacy issues or a lack of transparency. Now, I think there’s a narrative that as carriers become more data driven, or begin relying more on models, there’s a reduction in the human in the loop, so there’ll be less reliance on human judgment. I think that’s a false narrative. In my experience, adopting these tools really just shifts where the human is making those decisions. What I mean by that is that modeling is as much an art as it is a science. Building any sort of complex model requires a lot of really small judgments by whoever is doing the analysis: how do you treat each field, do you remove outliers, which fields can and can’t be included? All of that flows through into the model and then into the end results. As models get more and more complicated, or as more data gets added to the model, it places more importance on the person handling that data or designing that model to make the right choices and to ensure that bias isn’t being introduced in any of those steps of the modeling process.

Now, as an industry, I think the insurance industry is well positioned, because actuaries have traditionally handled that role. For actuaries, there is a strict set of professional standards that they’re required to operate by, the Actuarial Standards of Practice, or ASOPs, and actuaries take those seriously; there are real professional repercussions if an actuary does not follow them. With an actuary handling the entire modeling process, or at least overseeing it, the insurance industry can feel confident that there is somebody with the technical experience to understand every step of the process at a minute level, but who is also thinking big picture about what this model, or what this data, actually means to the carrier. At each step of the process where bias could be introduced, it’s incumbent on the actuary to ensure that they’re acting in accordance with their professional standards and that what comes out of the model is reasonable and appropriate for the use case.

Now, in terms of actual steps to reduce bias, Christine mentioned one earlier, which is simply removing protected class fields from the data set. But because things like race or gender are so correlated with other behaviors that people exhibit, it would be possible to have a biased model even after stripping those out. So what you’ll still want to do is ensure that the results of whatever model comes out, after stripping out those protected fields, still aren’t overly correlated with those protected classes. Doing that through statistical testing allows you to feel confident that you’re not still discriminating or picking up bias through some of these proxy variables that are correlated with things like race, for instance.
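
Here is a minimal sketch of the statistical check Justen describes: after protected fields have been excluded from training, test whether the model’s output is still correlated with a held-out protected attribute through proxy variables. The correlation threshold and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def proxy_bias_check(scores: np.ndarray, protected_flag: np.ndarray, threshold: float = 0.1):
    """Point-biserial correlation between model scores and a binary protected
    attribute that was held out of training; flag if the correlation is both
    material and statistically significant."""
    r, p_value = stats.pointbiserialr(protected_flag, scores)
    flagged = abs(r) > threshold and p_value < 0.05
    return r, p_value, flagged

rng = np.random.default_rng(1)
scores = rng.normal(0.5, 0.1, 5_000)            # synthetic model scores
protected_flag = rng.integers(0, 2, 5_000)      # synthetic protected-class indicator
r, p, flagged = proxy_bias_check(scores, protected_flag)
print(f"r = {r:.3f}, p = {p:.3f} -> {'review model for proxy bias' if flagged else 'no correlation detected'}")
```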

Jia: Absolutely. And you’ve certainly given us a lot to think about, with some food for thought there. Thank you so much for sharing your insights with us today, Christine and Justen. I really appreciate you coming to the podcast.

Christine: Thanks for having us.

Justen: Glad to be here.

Jia: And that’s the end of this episode of IB Talk. Thanks for being with us. I’m Jia Snape, news editor of Insurance Business. See you next time.

Narrator: Thank you for listening to IB Talk. For the latest episodes, be sure to follow us on all major listening channels.