Must-Do Steps to Prepare for AI Compliance


What You Need to Know

Financial services regulators are approaching artificial intelligence as though it will have a far-reaching impact in 2024.
Firms would be wise to categorize AI systems into risk levels based on their potential impact.
Firms should also implement policies and procedures around responsible AI use.

Governments in the United States, at both the federal and state levels, and abroad have started laying the groundwork for regulating artificial intelligence. Given the disruptive nature of the technology and its mass proliferation, individual industries are also developing their own guidelines.

Financial services is no exception. In fact, financial services regulators are already approaching AI as though it will have a far-reaching impact in 2024 and beyond. This article examines those initial efforts and how financial services firms can best prepare for these general and industry-specific changes.

Overarching AI Regulation

The AI Safety Summit in November 2023, held at Bletchley Park in the United Kingdom, was a global gathering to establish AI guidelines. Twenty-eight countries attended, and the result was the Bletchley Declaration: a commitment to identify AI safety risks and to build risk-based policies that mitigate “frontier” AI risks.

Nearly concurrently with that summit, the Biden administration issued an executive order on AI. Besides setting an overall tone, the order directs various cabinet members to research and establish guidelines for AI.

These directives range from ordering the Treasury Department to establish best practices for managing AI-specific cybersecurity risks to directing the Department of Homeland Security to establish an AI safety and security board.

While agencies have the ability to tailor guidelines, much of the executive order relies on individual cabinet members conducting their own reviews and developing their own plans for mitigating AI risk.


Notably, this lack of specificity gives agencies latitude that could create roadblocks for businesses, including financial services firms, looking to leverage AI for operational efficiency and other benefits.

Outside the United States, the European Union has reached agreement on the EU AI Act, which will have a similarly wide-ranging impact on organizations. Because the final text of the regulation has not yet been released, many affected firms are in wait-and-see mode. There will likely be a long lead time before the rule takes effect, so firms should have sufficient time to adapt.

Based on the existing text, however, firms would be wise to prepare for a risk-based approach, categorizing AI systems into risk levels based on their potential impact. Generative AI chatbots, for example, would fall into a limited-risk category, while AI systems that control critical infrastructure would be considered high-risk.

Finserv Oversight

Financial services regulators have been proactive, having already provided guidance, and they will continue to sharpen their positions on the matter.