Risk analysis as a prerequisite for trusting AI

AI is transforming the financial system. A recent global report from KPMG highlighted the growing use of AI in financial reporting and a surge in the adoption of generative AI tools. While these technologies offer promising opportunities for enhanced analysis and automation, the report also found that 40% of respondents have legitimate concerns about using generative AI because of risks such as bias, hallucinations, and cybersecurity vulnerabilities.

Why AI evaluation matters

As with all moments of immense change, generative AI brings great and growing responsibility for boards and C-suite executives. Risk management systems will, whether by law or by best practice, need to include a dedicated category for AI risk mitigation. Companies that take governance seriously and want to stay ahead of the regulatory curve need to understand how AI is deployed, used, and managed, both internally and externally. The buck ultimately stops with the board, but are boards truly ready for this new and growing responsibility, or, as some describe it, liability?

Anecdotally, boards are either fearful of what lies ahead or curious about the opportunity that is emerging. Even with increasing legislation, including the EU AI Act, which U.S. companies must comply with to trade in the EU's single market, and more than 78 evolving state and federal laws in the U.S., the responsibility for managing AI risk falls firmly on company leadership.

The need for keeping humans in the loop

To mitigate risk and harness the promise of AI, a fundamental best practice is to evaluate and assess AI systems or tools before using them. This helps businesses build trust, mitigate future risks, and maintain compliance. Evaluation is simpler to manage when AI is developed in-house; if an external partner is needed, thorough due diligence is essential before purchasing any external AI system or tool.

This process should also involve human oversight for verification and reassurance. There has been much recent discussion about keeping humans in the loop of AI systems, but little practical action demonstrating that it is a priority. Having a human in the loop is critical to ensuring AI safety and mitigating AI risk.
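
As a concrete illustration, a human-in-the-loop control can be as simple as a confidence gate that routes uncertain AI outputs to a reviewer rather than acting on them automatically. The Python sketch below is illustrative only: the function names, the stand-in model, and the 0.90 threshold are assumptions for the example, not a reference to any specific product or standard.

```python
# A minimal sketch of a human-in-the-loop gate, assuming the AI system
# exposes a confidence score alongside each decision. All names here
# (decide, the stand-in model) are hypothetical placeholders.

CONFIDENCE_THRESHOLD = 0.90  # below this, a person must sign off

def decide(application, model):
    decision, confidence = model(application)  # hypothetical model call
    if confidence < CONFIDENCE_THRESHOLD:
        # Route to a human reviewer instead of acting automatically;
        # the audit trail records that a person made the final call.
        return {"status": "pending_human_review",
                "ai_suggestion": decision,
                "confidence": confidence}
    return {"status": "auto_approved" if decision else "auto_declined",
            "confidence": confidence}

# Example with a stand-in model that returns (approve?, confidence):
result = decide({"applicant_id": 123}, lambda app: (True, 0.62))
print(result["status"])  # pending_human_review, since 0.62 < 0.90
```

The design point is that the gate, not the model, decides when a human is consulted, so the oversight policy can be audited and tightened independently of the AI itself.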

Identifying AI’s potential risks

Among the issues to prioritize are biases in training data and algorithms, which can lead to unfair or discriminatory outcomes; this is particularly concerning in domains like insurance underwriting. Businesses must implement techniques such as bias testing, fairness constraints, and diverse data sampling to identify and address potential discrimination in their AI models. If you cannot explain these models to a regulator, or even to a customer, you should be cautious about using them at all. AI models can also be vulnerable to adversarial attacks, data poisoning, and prompt injection, all of which compromise their integrity and reliability. Structured testing and evaluation exercises such as 'red teaming' can help probe an AI model's robustness, particularly in mitigating harms.
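
To make the bias testing mentioned above concrete, one simple check is demographic parity: comparing the rate of favorable outcomes across protected groups. The Python sketch below is a minimal illustration; the data, the group labels, and the 0.1 review threshold are assumptions for the example, and real-world fairness audits use far richer metrics and legally defined criteria.

```python
# A minimal sketch of a bias test, assuming a binary classifier and a
# single protected attribute. It computes the demographic parity gap:
# the spread in positive-outcome rates between groups.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (max gap in approval rates across groups, per-group rates).

    predictions: list of 0/1 model outputs (e.g., 1 = approve policy)
    groups: list of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        approvals[group] += pred
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: flag the model for review if approval rates differ by more
# than a chosen threshold (0.1 here, which is a policy decision).
gap, rates = demographic_parity_gap(
    predictions=[1, 0, 1, 1, 0, 1, 0, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
if gap > 0.1:
    print(f"Potential disparate impact: rates={rates}, gap={gap:.2f}")
```

A check like this is cheap to run on every model release, which is exactly the kind of repeatable evidence a regulator or customer can be shown.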

The path to AI compliance

Risk analysis and mitigation should not be taken lightly. It needs to be carried out comprehensively, taking into account the various factors that make up an AI model: visibility, transparency, integrity, optimization, effectiveness, and legislative readiness. Adopting this all-inclusive approach is essential for determining whether an AI model can be relied upon to yield optimal outcomes without posing societal harm.
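
One way to operationalize this is to track each factor as an explicit, auditable checklist item per model. The Python sketch below is a minimal illustration of that idea: the dimension names come from the text above, while the 0-to-1 scoring scale and the pass threshold are assumptions for the example.

```python
# A minimal sketch of a structured AI risk assessment. The dimensions
# are taken from the article; scoring and threshold are illustrative.

from dataclasses import dataclass, field

DIMENSIONS = ["visibility", "transparency", "integrity",
              "optimization", "effectiveness", "legislative_readiness"]

@dataclass
class ModelRiskAssessment:
    model_name: str
    scores: dict = field(default_factory=dict)  # dimension -> 0.0..1.0

    def is_complete(self):
        # Every dimension must be assessed before the model is relied on.
        return all(d in self.scores for d in DIMENSIONS)

    def passes(self, threshold=0.7):
        return self.is_complete() and all(
            s >= threshold for s in self.scores.values())

# Hypothetical usage: a model scored 0.8 on every dimension.
assessment = ModelRiskAssessment(
    "claims_triage_v2", scores={d: 0.8 for d in DIMENSIONS})
print(assessment.passes())  # True under the illustrative 0.7 threshold
```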

The financial services industry is already subject to numerous regulations and compliance requirements, such as data privacy laws, financial reporting standards, and industry-specific guidelines. Businesses must assess their AI models’ compliance with these regulations and ensure their use aligns with legal and ethical principles. Beyond ethical ramifications, there might also be legal fines to consider as a consequence of failing to comply – an avoidable cost to any business.

We stand at the dawn of a new era in which AI is changing our world and every business. To fully harness the opportunities AI presents, and to trust the outcomes, companies must consider their risk appetite and their partners, and ultimately understand their customers' attitudes toward AI. Given widespread fear, justified or not, some customers will want to know how involved AI has been in decisions that affect their individual rights. C-suites and boards will need a vigilant eye to harness the potential AI brings while managing its clear associated risks.