How to comply with anti-discrimination laws when using AI
Elayne Grace, chief executive of the Actuaries Institute, said the collaboration demonstrates the complex nature of society’s issues and the need for a multi-disciplinary approach, particularly where data and technology are used to improve the provision of fundamental services such as insurance.
Human Rights Commissioner Lorraine Finlay added: “With AI increasingly being used by businesses to make decisions that may affect people’s basic rights, it is essential that we have rigorous protections in place to ensure the integrity of our anti-discrimination laws. But without adequate safeguards, there is the possibility that algorithmic bias might cause people to suffer discrimination due to characteristics such as age, race, disability, or sex.”
An Actuaries Institute survey conducted this year found that at least 70% of respondents indicated the need for further guidance on compliance in this emerging area of wider AI use.
Grace commented: “Australia’s anti-discrimination laws are long-standing, but there is limited guidance and case law available to practitioners. The complexity arising from differing anti-discrimination legislation in Australia at the federal, state, and territory levels compounds the challenges facing actuaries and may reflect an opportunity for reform.”
Actuary Chris Dolman, who led the Actuaries Institute’s contribution to the preparation of the guidance resource as a representative of the Data Science Practice Committee, outlined strategies insurers can use to address algorithmic bias and avoid discriminatory outcomes when deploying AI systems, including rigorous design, regular testing, and ongoing monitoring.
“In the insurance context, AI may be used in a wide range of different ways, including in relation to pricing, underwriting, marketing, customer service (including claims management), or internal operations,” he said. “This guidance resource focuses on the use of AI in pricing and underwriting decisions, as these decisions are already likely to use AI and, by their nature, will have a financial impact that may be significant for an individual. Such decisions may also be more likely to give rise to discrimination complaints from customers. However, many of the general principles outlined may also apply to the use of AI-informed decision-making in other contexts.”
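The guidance resource does not prescribe any particular tooling, but a minimal sketch of the “regular testing” step Dolman describes might look like the following: a hypothetical check that compares a pricing model’s average quoted premiums across groups defined by a protected attribute and flags large disparities for human review. The function names, the 1.25 ratio threshold, and the sample figures are illustrative assumptions for this article, not part of the Actuaries Institute or Human Rights Commission guidance.

```python
from collections import defaultdict

def group_average_premiums(premiums, groups):
    """Average quoted premium for each group of a protected attribute."""
    totals, counts = defaultdict(float), defaultdict(int)
    for premium, group in zip(premiums, groups):
        totals[group] += premium
        counts[group] += 1
    return {g: totals[g] / counts[g] for g in totals}

def flag_disparity(premiums, groups, max_ratio=1.25):
    """Flag for review if the highest group average exceeds the lowest
    by more than `max_ratio` (an illustrative threshold only)."""
    averages = group_average_premiums(premiums, groups)
    highest, lowest = max(averages.values()), min(averages.values())
    return {
        "group_averages": averages,
        "ratio": highest / lowest,
        "needs_review": highest / lowest > max_ratio,
    }

# Hypothetical example: quoted premiums alongside a protected attribute.
# The values are placeholders, not real insurance data.
report = flag_disparity(
    premiums=[820.0, 640.0, 905.0, 610.0, 700.0],
    groups=["A", "B", "A", "B", "B"],
)
print(report)
```

In practice, a check of this kind would sit alongside the broader design and monitoring controls the guidance describes, and a flagged disparity would prompt human investigation of the underlying rating factors rather than an automatic conclusion that unlawful discrimination has occurred.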