Technology

ASIC Chair Joe Longo raises concerns about AI regulations

Gaps identified to enhance consumer protection and industry integrity

By Roxanne Libatique

Australian Securities and Investments Commission (ASIC) Chair Joe Longo has expressed concerns about the adequacy of current regulations surrounding artificial intelligence in Australia.

In a recent speech at the UTS Human Technology Institute Shaping Our Future Symposium, Longo emphasised that existing laws might fall short of preventing AI-related harms before they occur, leaving regulators to rely on additional efforts to ensure a robust response after such incidents take place.

Quoting from the federal government’s interim report on AI regulation, Longo highlighted the clear gap between the current regulatory landscape and the ideal.

Is the existing regulation enough?

Despite the prevailing regulatory framework, Longo questioned whether it is sufficient to address the challenges posed by the accelerated development of AI technologies.

He acknowledged the immense potential benefits of AI, estimating a substantial contribution to Australia’s GDP by 2030. However, the rapid pace of AI advancements raises crucial questions about transparency, explainability, and the capacity of existing regulations to adapt promptly.

“It isn’t fanciful to imagine that credit providers using AI systems to identify ‘better’ credit risks could (potentially) unfairly discriminate against those vulnerable consumers. And with ‘opaque’ AI systems, the mechanisms by which that discrimination occurs could be difficult to detect. Even if the current laws are sufficient to punish bad action, their ability to prevent the harm might not be,” Longo said.


Recommendations

Longo emphasised the need for transparency and oversight to prevent unintended or unfair practices in various sectors, including insurance. He also suggested potential solutions, such as red-teaming and “AI constitutions,” while acknowledging their vulnerabilities.

Longo raised the possibility of mandatory “AI risk assessments,” akin to the EU approach, emphasising the need to ensure their effectiveness in preventing harm.

“These questions of transparency, explainability, and rapidity deserve careful attention. They can’t be answered quickly or off-hand. But they must be addressed if we’re to ensure the advancement of AI means an advancement for all. And as far as financial markets and services are concerned, it’s clear there’s a way to go in answering them,” he said.
