NY regulator warns of AI-driven cyber threats, urges enhanced security measures

New guidance highlights AI’s dual role in boosting defenses and enabling cyberattacks

Advances in artificial intelligence (AI) are introducing new cyber risks while simultaneously enhancing digital defense capabilities, according to guidance released by the New York Department of Financial Services (DFS).

The guidance aims to help DFS-regulated businesses mitigate AI-driven cyber threats. It does not establish new regulations but builds on the department's existing cybersecurity rules.

DFS Superintendent Adrienne A. Harris highlighted AI’s dual role: improving threat detection and incident response while also opening new avenues for cybercriminal activity.

“New York will continue to ensure that as AI-enabled tools become more prolific, security standards remain rigorous to safeguard critical data, while allowing the flexibility needed to address diverse risk profiles in an ever-changing digital landscape,” she said in a report from AM Best.

DFS warned that AI is being used to increase the speed and scale of cyberattacks, including social engineering attacks and the creation of deepfakes. The department also pointed out that less-experienced hackers are now able to launch more sophisticated attacks with AI’s help.

Moreover, AI systems depend on large datasets, which are often stored by insurers and other regulated entities, making these organizations attractive targets for cybercriminals.

To counter these emerging threats, DFS advised companies to assess AI-related risks across their operations and adjust their security measures accordingly. It also stressed the importance of monitoring vendors that utilize AI to ensure their practices align with the organization’s cybersecurity standards.


In addition to these measures, DFS recommended implementing multifactor authentication, limiting access to sensitive data, and monitoring systems for unusual activities, such as suspicious queries or potential data misuse.

The guidance also suggested mandatory staff training on AI-driven threats, such as deepfakes, and employing data minimization strategies to limit exposure in the event of a breach.

In a related development, California recently took steps to regulate the use of AI by health insurers. The state prohibited the use of AI, algorithms, and other software to delay, deny, or modify healthcare services.
