2024 Cyber Risk Predictions

We’re closely watching how the courts will respond to a recent spate of generative artificial intelligence (AI) and privacy-related litigation, evaluating how different global markets will respond to regulatory changes and thinking about how cybercriminals will evolve their tactics. And, of course, we’re preparing for the myriad ways all of these emerging trends will impact businesses.

What lies ahead for the industry?

At the beginning of 2023 [LINK to CSS], we predicted that greater incident complexity, class actions for cyber extortion and increased use of security measures like multi-factor authentication (MFA) would be the big themes – all of which were indeed borne out over the course of the year.

Will our 2024 predictions be equally prescient? That remains to be seen. But regardless of how things unfold, we will be here to track the trends, share our insights and help our broker and client partners stay informed so that you can make the very best cyber security decisions possible. Thanks for being with us for the journey.

Patricia Kocsondy
Head of Global Cyber Digital Risks
Beazley

AI’s considerable impact will play out in the courts in 2024

With high-profile lawsuits pending against key players like OpenAI and Meta, we are likely to see US courts render a patchwork of decisions over the coming months in several key categories:

  1. Privacy: Cases are arising out of the use of “data scraping” technology to train AI algorithms (see, for example, the FTC’s investigation of ChatGPT creator OpenAI over consumer protection issues, as reported by AP News). Data scrapers extract information from websites that is then sold to train large language models, which can result in the collection and dissemination of sensitive information.
  2. Copyright/Intellectual Property: Artists and authors are alleging that copyright laws are violated by the training of AI models on their work without any compensation (see, for example, The New York Times’ pending suit against OpenAI). These cases could force large tech companies to change the way AI is built, trained and deployed so that it is fairer and more equitable.
  3. Libel/Defamation: Questions are arising around who bears responsibility when AI produces false, reputation-harming information, as in a recent incident at a local car dealership. Even more serious is the potential use of these tools to spread disinformation and to create deep fakes.
  4. Fraud/Breach/Ransomware: The availability of new AI tools will impact claims involving fraud, data breaches and ransomware.

Regulatory changes on a global scale are expected to influence behaviours

What happens in one region could impact other regions around the world, including possible ripple effects on cover and policies. To help our policyholders stay informed, we regularly monitor legal and regulatory developments across the globe.

In Portugal, the Insurance and Pension Funds Supervisory Authority (ASF) has recently declared that insurance contracts indemnifying the ransom payment associated with cybercrimes are not legally permissible, due to violation of Portuguese civil law.

Australia will be implementing mandatory reporting requirements for ransomware. The federal government is keen to understand which organizations are being targeted, as ransomware costs the Australian economy up to AU$3 billion (US$1.9 billion) in damages annually.

France has also implemented mandatory reporting requirements, but only for cyber security incidents covered by a cyber policy. The law requires insureds to file a complaint within 72 hours of becoming aware of a system compromise.

In the US, under the SEC’s new cyber security disclosure rule, public companies must now disclose key details of a cyber security incident within four business days of determining that the incident is material.

Additionally, the FBI has announced that it will increase the number of agents deployed to American embassies to focus on cyber-related crime. This increase will bring the total number of agents in foreign countries to 22 and is designed to improve the FBI’s efforts to combat international cybercrime.

Privacy and tracking claims are likely to reach a tipping point

Privacy will remain a prominent and intensifying theme in 2024, particularly in the US, where more privacy and tracking claims are anticipated. This is less of a global issue for now, but we continue to track developments in other jurisdictions where class action mechanisms are in place; for example, we have seen an increase in data privacy class actions in Australia. Fortunately, most other global markets remain insulated from mass and class action litigation thanks to “loser pays” rules and a general reluctance by the courts to open the floodgates.

In addition to the aforementioned suits arising out of generative AI, we anticipate that facial recognition tools will also be in the hot seat in US courts. This will likely include more claims under the Illinois Biometric Information Privacy Act (BIPA) related to facial scanning, as well as an increase in geolocation claims due to vehicle tracking.

With large-scale privacy-focused class actions, plaintiffs will look for a hook in older statutes that provide for statutory damages, such as federal and state wiretapping laws and the Video Privacy Protection Act (VPPA). As more court decisions are handed down over the course of the year, we hope some of these potential class actions will be curtailed.

Attackers will employ a wider range of strategies and tactics

Cybercriminals are constantly evolving their tactics to increase pressure on their victims, as they seek to maximise the monetary value and impact of their attacks.

Employees will require additional training on AI risks as these continue to evolve in 2024. Human resources teams, for example, should be prepared for cybercriminals to make use of AI bots to gain employees’ trust.

Organizations should also be aware that cybercriminals are starting to publish leaked data on the public internet, making it more accessible and thus increasing the pressure on victims to pay a ransom. An organization named on a cybercriminal’s blog can also become a target for other cybercriminals, who might reach out demanding a ransom payment while falsely claiming to be the group that performed the hack.

Publicly exposed data carries other risks as well, including damage to merger and acquisition (M&A) strategy and the undermining of intellectual property rights, especially when trade secrets are stolen.

AI will expand the threat landscape in 2024

Regulation will continue to evolve over the course of the year and could impact the ability of insurance to provide the level of coverage that it currently does in some territories. There will also be greater pressure on firms who suffer a data breach or cyberattack to notify official privacy bodies, which could create additional knock-on effects following an incident.

Simultaneously, cybercriminals are becoming more aggressive in their attacks as they seek new ways to force companies to pay ransoms and monetise the data they steal.

Given these factors, having an experienced cyber insurer that offers risk management services on your side will be more important than ever. By taking proactive risk management steps, organizations can reduce the likelihood of an attack and put themselves in the best position to avoid the financial, managerial and reputational damage that an attack can cause.

“Data scraping can result in the collection of sensitive information. The US Federal Trade Commission has opened an investigation into whether OpenAI violated privacy and consumer protection laws by scraping people’s online data to train ChatGPT. There are also lawsuits against OpenAI alleging privacy violations as a result of the data scraping that was used to train the model.”

Melissa Collins, Claims Focus Group Leader, Cyber & Technology Third-party Claims, Beazley

“In addition to country-specific legislation, on 8 December 2023, the European Commission, the European Parliament and the European Council reached a political agreement on the terms of the European Union Artificial Intelligence Act (the “EU AI Act”). The final text of the EU AI Act is not yet available officially; once it is made public, we will be able to assess any possible impact that it will have on clients and their cyber insurance policies.”

Sandra Cole, Claims Focus Group Leader – London Market and International Cyber, Beazley

“It’s a pivotal time for privacy issues. With the lack of fines from regulators, coupled with recent and upcoming enforcement actions, this is a good time to stay close to your cyber insurer. We're here to help ensure you're educated about the issues and prepared for their potential implications.”

Andrew Girman, Cyber Services Manager, Philadelphia

“We have witnessed an escalation in cyberattacks targeting critical infrastructure, notably water treatment facilities. One threat actor, in particular, has abandoned previously held 'rules of engagement,' signaling a further disregard for ethical boundaries. Consequently, this could put other critical assets, such as hospitals and nuclear power facilities, at higher risk. It's a stark reminder of the increasingly perilous landscape in cyber security, demanding vigilance and robust protective measures.”

Max Bradshaw, Cyber Services Manager, Chicago
