Why an AI ethical framework is necessary for insurers

Innovations leveraging artificial intelligence (AI) have generated news headlines and opinions ranging from enthusiastic to foreboding. Regardless of personal feelings, AI has the potential to revolutionize the way work is done in general business, academia, medicine, and possibly even the arts and entertainment sectors.

The introduction of AI chatbots has allowed industries, including insurance, to reinvent their customer service and claims processing models. More broadly, AI has increased underwriting operational efficiency and automated fraud detection, helping to reduce risk by recognizing abnormalities in claims data.
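To make the anomaly-detection idea concrete, here is a minimal sketch using an isolation forest, a common unsupervised technique for flagging unusual records. The feature names, values, and contamination rate are hypothetical illustrations; the article does not specify a particular method, so this is one plausible approach, not an insurer’s actual model.

```python
# Minimal sketch: flag unusual claims with an isolation forest.
# Features and parameters are hypothetical illustrations.
import pandas as pd
from sklearn.ensemble import IsolationForest

claims = pd.DataFrame({
    "claim_amount":      [1200, 950, 30000, 1100, 875, 45000],
    "days_to_report":    [2, 5, 45, 3, 4, 60],
    "prior_claim_count": [0, 1, 6, 0, 2, 8],
})

# An isolation forest gives lower scores to records that are easy to
# isolate, i.e., abnormal relative to the bulk of the data.
model = IsolationForest(contamination=0.2, random_state=0)
claims["flag"] = model.fit_predict(claims)  # -1 = anomalous, 1 = normal

# Flagged claims are routed to a human reviewer, not denied automatically.
print(claims[claims["flag"] == -1])
```

Flagging is only a triage step; as the rest of this article argues, what happens after a record is flagged is where the ethical questions begin.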

However, as with any new technology, AI isn’t without risks, and some of the most concerning are ethical in nature, especially when AI is used in a social context. Recognizing this, 98% of executives across sectors reported in 2022 that they had at least some plans to make their AI responsible. Technology has the potential to improve our lives, but we should also be aware of the harm it can cause when the right safeguards are not in place.

The social context

While autonomous industrial machinery with limited human interaction might have little to no social context, insurance affects people. For example, insurers can leverage AI to forecast demand trends. As explored in a recent report by the Society of Actuaries Research Institute, “Avoiding Unfair Bias in Insurance Applications of AI Models,” if insurers lack historical data for traditionally unexplored segments of the population, the models could exclude some customer groups, resulting in products that fail to serve those customers’ needs effectively.
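One way to surface this exclusion risk is a simple coverage check that compares how customer segments are represented in the training data against their share of the target population. The segment labels, reference shares, and underrepresentation threshold below are hypothetical illustrations, not drawn from the report.

```python
# Minimal sketch: flag population segments underrepresented in training data.
import pandas as pd

train_segments = pd.Series(
    ["urban", "urban", "suburban", "urban", "suburban", "urban"]
)
# Hypothetical reference shares, e.g., from market or census data.
population_share = {"urban": 0.55, "suburban": 0.30, "rural": 0.15}

train_share = train_segments.value_counts(normalize=True)
for segment, expected in population_share.items():
    observed = train_share.get(segment, 0.0)
    if observed < 0.5 * expected:  # hypothetical threshold
        print(f"{segment}: {observed:.0%} of training data vs "
              f"{expected:.0%} of population")
```

Here the rural segment never appears in the training data, so any demand forecast built on it would be silent about rural customers’ needs.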

The decision-making process of an AI-informed model includes the algorithm design, the types of data elements used, and end users’ interpretation of results. There is a risk of bias if any of these elements isn’t clearly understood. For example, if a company is unaware that its data sets are too simplistic or outdated, the results can be biased. Additionally, the large amounts of data and multivariate risk scores used in micro-segmentation can be complicated and opaque. Not understanding what drives a model’s decision-making can unintentionally result in discrimination.
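As one illustration of making a model’s behavior less opaque, a review team might compare outcome rates across groups. The sketch below computes a simple approval-rate gap (a demographic parity difference); the group labels, data, and ten-point tolerance are hypothetical, and a real review would use several complementary fairness metrics.

```python
# Minimal sketch: compare model approval rates across groups.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0],  # model decisions, 1 = approved
})

rates = results.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Approval-rate gap: {gap:.0%}")
if gap > 0.10:  # hypothetical tolerance; a trigger for review, not an auto-fix
    print("Gap exceeds tolerance; investigate what drives the decisions.")
```

A gap alone does not prove unfair bias, but it tells reviewers where to look before a model reaches production.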

Internal guardrails

When an organization builds an ethical framework to prevent discrimination in AI applications, leaders should start with a flexible governance structure that can address both today’s environment and future developments, such as new regulations and changing stakeholder and customer expectations.

Individuals building or working with AI models can benefit from following the evolving regulatory landscape and any internal policies established by the organization. Doing so helps confirm that AI development aligns with the organization’s objectives and risk tolerance and helps reduce unintended consequences. Insurers can also tailor their AI governance structures to suit their business objectives. By engaging a broad range of stakeholders in discussions around AI governance, insurers can achieve a more nuanced understanding of how AI is used in the organization and the risks that come with it.

Additionally, providing ethics training can help organizations define unfair bias in the context of AI models and bolster employee understanding of regulatory and ethical requirements. These efforts also require conducting a model risk assessment to determine the necessary levels of scrutiny and control. The risk tier that results from the assessment then dictates the model’s design and development requirements, including risk mitigation strategies.
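To illustrate how a risk tier might dictate controls, here is a minimal sketch of a tiering function. The assessment dimensions, scoring, and tier descriptions are hypothetical; a real framework would weigh many more factors, such as data sensitivity and regulatory exposure.

```python
# Minimal sketch: map a model risk assessment to a control tier.
from dataclasses import dataclass

@dataclass
class ModelRiskAssessment:
    affects_customers_directly: bool   # e.g., pricing or claims decisions
    uses_protected_or_proxy_data: bool
    human_review_before_action: bool

def risk_tier(a: ModelRiskAssessment) -> str:
    """Return the control tier that the assessment dictates."""
    score = (
        (2 if a.affects_customers_directly else 0)
        + (2 if a.uses_protected_or_proxy_data else 0)
        + (0 if a.human_review_before_action else 1)
    )
    if score >= 4:
        return "high: full validation, bias testing, ongoing monitoring"
    if score >= 2:
        return "medium: documented testing and periodic review"
    return "low: standard development controls"

# An automated pricing model using potentially proxy data, no human review:
print(risk_tier(ModelRiskAssessment(True, True, False)))  # high tier
```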

Preparing for the future

Like the rest of the world, insurance companies are increasingly relying on AI, and this reliance will continue to grow. Actuaries will deliver insights derived from AI models more rapidly and across new use cases, increasing the potential for inadvertent discrimination. Implementing a robust set of processes and controls is therefore imperative. An ethical framework can go a long way in mitigating the risks of unfair bias throughout all stages of AI use and development.
