📊 Our 2025 Cyber Claims Report is out now!
Skip To Main Content
Cyber Incident? Get Help
Blog homeCyber InsuranceSecurityExecutive RisksBroker EducationLife at Coalition

AI Advancements Are Reshaping Cyber Insurance Coverage

Person > Tiago Henriques
Tiago HenriquesJuly 16, 2025
Share:
Coalition Blog AI-Advancements-Cyber

Forward-thinking businesses aren’t the only ones using artificial intelligence (AI) to work smarter and move faster.

Threat actors are turning to AI to enhance their social engineering tactics, like deploying convincing (automated) phishing emails or creating deepfakes that mimic the voices and faces of trusted colleagues. And as more businesses look to implement AI systems to improve their own productivity, threat actors are eager to poke for exploitable weaknesses in this new technology.

The Wild West of generative AI is here. So, as “bad guys” optimize attack methods, how can everyone else reduce their risk? One answer is forward-thinking insurance coverage that addresses sophisticated AI-powered attacks and enhanced cyber risks. 

Below, we’ll examine why AI-related cyber incidents necessitate the evolution of cyber insurance policy language and how to determine if your coverage adequately meets today’s risks.

Social engineering is on the rise

Phishing emails have skyrocketed by 856% over the last several years with the help of large language models (LLMs), like ChatGPT.

Social engineering scams have been around since the dawn of the web, but tell-tale signs like poor grammar and formulaic messages (the infamous Nigerian prince) are on the way out in favor of AI-enhanced communications. Threat actors can now personalize messages quickly by using AI to scrape social media pages and corporate websites, tailoring information and tone to specific users. 

And with AI, they can do so at scale. LLMs automate the entire process by crafting emails, identifying targets, and collecting information, ultimately cutting the cost of deploying scams by up to 95%

Phishing emails have skyrocketed by 856% over the last several years with the help of large language models (LLMs), like ChatGPT.

Threat actors are also turning to deepfake technology to manipulate images, audio, and video recordings. Last year, an employee at a multinational finance firm sent $25 million to threat actors after “meeting” with the company’s supposed chief financial officer in a conference call. In another well-publicized attempted deepfake scam, threat actors impersonated the CEO of a large advertising group in a Microsoft Teams meeting, in order to try to solicit money and personal details from an agency leader.

Not all cyber insurance coverage is built to address the escalating risk of AI-fueled social engineering. Losses arising from deepfakes can land in a coverage “gray area” between cyber and crime insurance. 

Cyber insurance doesn’t always include coverage for impersonation fraud, and with the rise of deepfakes, some insurance providers are moving to include explicit exclusions for these incidents. While enhancements in crime insurance coverage have occurred to cover social engineering losses, not all policies have broad “all-risk” language which could leave deepfakes as a potentially unprotected avenue of fraud. 

AI chatbots are vulnerable to attacks

To find answers when browsing the web, 68% of people have turned to an AI chatbot. From retailers to hospitals, more and more businesses are implementing virtual assistants for lead generation, customer engagement, and 24/7 availability.

Most customer support chatbots operate with guidelines that keep provided outputs relevant. However, LLMs cannot reliably distinguish between malicious user input and system instructions.

Cleverly crafted prompts from an attacker can result in the chatbot revealing sensitive information not intended to be shared. For this reason, the Open Worldwide Application Security Project (OWASP) ranked prompt injection as the number one security AI risk in 2025. 

Consider this: A hospital creates a customer service chatbot using AI. Patients send queries and the system accesses internal databases to answer them. But a threat actor sends a prompt injection that tricks the system into sharing sensitive patient health information. The hospital now has a security failure that likely requires a digital forensics investigation, legal counsel, and patient notification.

Without clear policy language, traditional cyber coverage may fall short if an AI model resulted in a security failure or privacy breach. And as a result of the above prompt injection, businesses would ideally also want to have both first-party and third-party coverage that addresses direct financial losses for both investigation costs and liability as a result of the breach.

Businesses need adequate protection against AI risk

The evolution of AI necessitates that cyber insurance adapts rapidly to address potential gaps in coverage. What should businesses look for in their existing policies to stay protected against AI risk?

Find explicit language on new threats

Insurance traditionally moves slow. But given the prevalence of AI today, many businesses are rightfully searching for explicit coverage and pushing insurance providers to act. Simultaneously, exclusions are being drafted as a knee-jerk reaction to losses associated with AI.

Businesses should consider their specific risk profile, AI usage, and other security controls to determine coverage needs:

  • Do they heavily rely on third-party AI systems? 

  • Have they experienced business email compromise before?

  • Does their business have its own public-facing chatbot?

Depending on the answers to the above, direct policy language and how much coverage is provided can play an important role in deciding on the right risk mitigation options.

Implement security controls to reduce risk

  • Multi-factor authentication: By requiring a secondary authentication method to log in, businesses can add a secondary line of defense to prevent account compromise. In the era of AI-fueled attacks, FIDO-2, which uses biometric factors for authentication, is the gold standard when it comes to MFA.

  • Limit employee access: By assigning permissions based on role, businesses can reduce the potential impact of a compromised account following a phishing attack. Additionally, businesses should apply that same logic to LLMs. LLMs should only have access to data sources they need to perform necessary functions. 

  • Security awareness training: Security awareness training can empower employees to identify phishing attempts and help businesses avoid costly cyber attacks. In fact, at least one source found that 80% of businesses said employee education reduced phishing susceptibility. 

  • LLM proxy: Sending user data directly to a LLM without any safeguards can increase an organization’s risk for a data breach. An LLM proxy sits between a business’s application and the LLM provider (like OpenAI) and inspects each query to enforce security policies.

Prioritize hands-on cyber claims teams

If a business believes an employee may have clicked on a malicious link, speed matters. Yet, many businesses hesitate to report issues to their insurance provider in an attempt to investigate independently and avoid a claim. 

The bright side: Many claims teams want to help businesses avoid losses, too. 

If an employee fell for a deepfake video of the CEO requesting payment for an urgent project and sent $500,000 to a criminal-controlled bank account, it’s not too late to get the money back. Experienced cyber claims teams may be able to claw back the funds with the help of government agencies.

For example, in 2024, Coalition successfully put $31 million directly back in policyholders’ pockets through clawback efforts.

Cyber coverage built to address emerging risks

Given the current reality of digital risk, there has never been a greater need for forward-thinking cyber insurance. Coalition’s Active Cyber Policy addresses evolving digital threats with explicit and affirmative coverage:

  • Artificial Intelligence-Related Security Events: Including protection against deepfake-enabled fraud and AI-caused security failures.

  • SEC Cybersecurity Disclosure Requirements: Coverage for legal expenses related to materiality assessments and regulatory filings under new SEC rules.

  • Expanded Definition of Privacy Liability: Third-party privacy coverage includes violations of privacy law, extending protection beyond just violations of the policyholder's own privacy policy to address the risk of employees potentially sharing sensitive data with third-party LLMs.*

In addition to expanded protection, Coalition’s Active Cyber Policy offers advantages for security-conscious policyholders, like Vanishing Retention. By addressing new risks in policy language and rewarding policyholders for their quick action, Coalition is setting a new standard in cyber insurance.


INNOVATIVE COVERAGE. EXPANDED PROTECTION.

Meet the Next Generation of Active Insurance

Explore Coalition’s new Active Cyber Policy >


*Limitations and exclusions apply; all decisions regarding any insurance products, including approval for coverage, will be made solely by the insurer underwriting the insurance under the insurer’s then-current criteria. All insurance products are governed by the terms and conditions set forth in the applicable insurance policy. Please see a copy of your policy for the full terms and conditions. Any information on this advertising does not in any way alter, supplement, or amend the terms and conditions of the applicable insurance policy and is intended only as a brief summary of such insurance products. Policy obligations are the sole responsibility of the issuing insurance carrier.
Insurance products are offered in the U.S. by Coalition Insurance Solutions Inc., a licensed insurance producer and surplus lines broker, (Cal. license # 0L76155), acting on behalf of a number of unaffiliated insurance companies, and on an admitted basis through Coalition Insurance Company, a licensed insurance underwriter (NAIC # 29530). See license and disclaimers. Products or services may not be available in all countries and jurisdictions, and coverage is subject to underwriting requirements and actual policy language. Coalition is the marketing name for the global operations of affiliates of Coalition, Inc.
This blog post is designed to provide general information on the topic presented and is not intended to construe or the rendering of legal or other professional services of any kind. If legal or other professional advice is required, the services of a professional should be sought. The statements contained herein are not a proposal of insurance but are for informational purposes only. Insurance coverage is subject to and governed by the terms and conditions of the policy as issued. Coalition makes no representations regarding coverages, exclusions or limitations in any products offered on behalf of any insurer. Neither Coalition nor any of its employees make any warranty of any kind, express or implied, or assume any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, product or process disclosed. The blog post may include links to other third-party websites. These links are provided as a convenience only. Coalition does not endorse, have control over nor assumes responsibility or liability for the content, privacy policy or practices of any such third-party websites. Copyright © 2025. All rights reserved. Coalition and the Coalition logo are trademarks of Coalition, Inc.

Tags:

Active InsuranceCyber Threats

Related blog posts

See all articles
Cyber Insurance

Blog

Funds Transfer Fraud: 3 Steps to a Successful Clawback

Coalition has recovered a total of $101 million in stolen funds through clawbacks. What does the path to recovery look like?
Anne JuntunenJuly 10, 2025
Cyber Insurance
Cyber Insurance