Guy Carpenter warns AI adoption brings new cyber aggregation risks

AI deployment methods heighten vulnerabilities, increasing risks across the supply chain

Reinsurance

By Kenneth Araullo

As the adoption of artificial intelligence (AI) accelerates, its deployment methods are evolving, introducing new dynamics and increasing the potential for cyber event aggregation, according to a new report from Guy Carpenter.

The report, part of a broader analysis by the firm, underscores that AI presents an additional threat to the software supply chain. For businesses using third-party AI solutions such as ChatGPT or Claude, deploying AI models within a customer’s network or through external hosting can introduce significant risks.

A compromise or degradation of a third-party AI model could result in outages or breaches for every customer relying on it, making the AI vendor a single point of failure. The report cites several instances, including multiple ChatGPT outages in 2023 and 2024 and the compromise of the PyTorch machine learning library in December 2022, as examples of such vulnerabilities.

Guy Carpenter further elaborates on the new attack surfaces introduced by AI deployment. Once deployed, AI models interact with users, making them susceptible to manipulation. Techniques like “jailbreaking,” where models are tricked into behaving outside their intended boundaries, can lead to data exposure, network breaches, or other significant consequences.

The report also references incidents such as a February 2024 vulnerability in an open-source library that led to stolen ChatGPT interactions, as well as Air Canada being ordered to pay damages after its AI chatbot gave a customer incorrect information.


The report also identifies data privacy as a critical area of concern. AI models often require large, sensitive datasets for training, which means data must be exposed to, replicated in, or made available to training pipelines. The success of AI technology is closely tied to the scale of the data it can access, creating strong incentives for data aggregation.

However, this centralization of data also increases aggregation risk. Guy Carpenter notes a September 2023 incident in which AI researchers accidentally exposed 38 terabytes of data due to a misconfiguration, illustrating the potential impact of such risks.

In the realm of cybersecurity, AI is increasingly being used in security operations, including response orchestration. While AI can offer rapid, automated responses to threats, Guy Carpenter warns of the risks associated with granting AI systems high-level privileges.

The report mentions a recent outage at Microsoft, potentially exacerbated by the company’s automated response to a malicious DDoS attack, as a cautionary example. The report emphasizes the need for guardrails, checks, and human oversight in AI-driven security responses to avoid unintended consequences.

Looking forward, Guy Carpenter suggests that as companies continue to integrate AI into their operations, the insurance and reinsurance sectors should view this trend as an opportunity for growth rather than a risk to be avoided.

The report draws parallels between the current AI adoption wave and the transition to cloud services a decade ago, noting that while both present significant opportunities, they also introduce new dependencies on third-party providers. By assessing AI deployment through the lens of third-party risk, companies can better evaluate and manage the potential challenges associated with this transformative technology.

