Discover how AI is changing the cybersecurity landscape

 Artificial Intelligence is rapidly reshaping the cybersecurity threat landscape, creating both new challenges and opportunities for large enterprises. As malicious actors embrace AI tools and large language models (LLMs), cyberattacks are becoming faster, smarter, and harder to detect. 

This article explores how cybercriminals are using artificial intelligence to launch faster, more convincing, and harder-to-detect attacks—from deepfake scams to AI-generated phishing and malware. It also highlights why proactive defence strategies, including AI-powered security tools and employee training, are more critical than ever.

Read on to discover how speaking with your insurance provider before an incident occurs can make all the difference—and how your cyber insurance policy can support you when it matters most.

AI-powered threats: The new normal

In 2025, cybersecurity is no longer a matter of patching known vulnerabilities—it is about anticipating and countering adaptive, AI-driven threats. Threat actors now use generative AI to craft convincing phishing emails, automate voice-based scams (vishing), and send malicious SMS messages at scale. These techniques not only increase the frequency of attacks but also significantly improve their success rate.

Social engineering is now more precise and more scalable. An adversary with access to AI tools can simulate legitimate communication with astonishing accuracy, imitating voices, mimicking writing styles, and even generating deepfake videos.

This evolution demands a shift in how organisations defend their digital environments—traditional methods are no longer enough. Enterprises must invest in AI-driven defence mechanisms to counter these threats effectively.

Ransomware: Still a persistent and evolving threat

Despite the buzz around AI, ransomware remains a cornerstone of enterprise risk in 2025. What has changed is the sophistication with which these attacks are executed. Attackers are layering extortion tactics, for example stealing data, encrypting it, and then threatening public disclosure if ransom demands are not met. Large enterprises should make ransomware preparedness a priority. This includes conducting regular system backups, developing and rehearsing incident response plans, and prioritising employee training to mitigate the impact of an attack.

Peter Granlund, CISO at If

Cybercriminals are leveraging AI to conduct more sophisticated attacks

Large language models (LLMs) enable anyone in the world to write personalised, well-written phishing emails in almost any language, increasing their credibility and making them more likely to deceive recipients. Defences against AI-driven phishing will likely improve as AI is adopted in filtering; however, the responsibility for safeguarding against this threat ultimately rests with us, the human workforce.

"In a recent phishing simulation by cybersecurity company Arsen, they used AI to craft emails and managed to trick 35% of the recipients", explains CISO Peter Granlund at If. 

Another area is malicious code. LLMs enable malware writers to create new malware in, or port existing malware to, almost any exotic programming language, thereby evading detection: most malware is written in C/C++ and compiled with Microsoft's compiler, and signature-based tools are tuned to recognise those binaries. By wrapping the same malicious logic in an exotic language, attackers can slip past signature-based detection.
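
To see why this evasion works, consider a minimal, benign sketch of signature-based detection, written here in Python with a hypothetical hash feed (real products use far richer signature sets than this). The scanner flags a file only if its hash appears in a known-bad set, so the same malicious logic rewritten in another language, or built with a different compiler, produces a new hash and passes the check.

    # Minimal sketch of signature-based detection with a hypothetical
    # hash feed. Any byte-level change to a binary, e.g. the same logic
    # rewritten in another language or built with a different compiler,
    # changes its hash, so the sample matches no known signature.
    import hashlib

    KNOWN_BAD_HASHES = {
        # Hypothetical example entry; real products use far larger feeds.
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def is_known_malware(path: str) -> bool:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest() in KNOWN_BAD_HASHES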

"You may be surprised to learn that the AV-TEST institute registers approximately 450,000 new malware samples every day! This demonstrates that the malware writing industry is evolving into an underground industry", Granlund comments.

The recently released GPT-4.1 is especially good at coding, so as this rapid development continues, we can expect cybercriminals to leverage new AI capabilities to create malware in obscure or exotic programming languages and evade detection.


I anticipate that cybercriminals will evolve their use of AI bots to penetrate organisational defences much faster, within a few minutes, to steal data or for national security purposes.

Peter Granlund, CISO at If

AI-driven interactive attacks will grow in coming years

In Australia, research commissioned by Mastercard indicates that one in eight businesses targeted by deepfake scams in 2024 fell victim: 20% of Australian businesses received deepfake threats in the past 12 months, and of these, 12% fell for the manipulated content.

"With the rapid evolution of Large Language Models, I expect that we will see more interactive AI attacks, where AI without human intervention interacts with messages, voice and video in a way that will make it increasingly difficult for us humans to determine whether it is actually the intended person", Granlund says. 

He expects that within the next few years, cyber defence software and services will have adopted AI capabilities that enable organisations to block cyberattacks at an early stage: predictively, quickly, and automatically, with high precision and a low rate of false positives. Historically, this has required significant resources and customisation, so in this area AI can be a real game-changer.

Ghita Meyer, Head of Liability and Cyber Underwriting

Important advice on how to protect your organisation

As AI becomes commoditised, the gap between offence and defence is narrowing. Malicious actors are rapidly integrating AI into their attack workflows, but many organisations lag in deploying AI-based defence mechanisms. Forward-looking organisations are already adopting AI-driven threat detection, behavioural analytics, and anomaly detection systems to stay ahead. 2025 will test the resilience of every large enterprise's cybersecurity posture. The organisations that thrive will be those that shift from reactive to anticipatory defence models, treating cybersecurity as a dynamic, systemic business risk rather than just an IT problem.
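
As an illustration of what such anomaly detection looks like in principle, here is a minimal sketch in Python. It assumes the scikit-learn library is available, and the login-session features are purely illustrative, not a blueprint for a production system: the model learns what normal activity looks like and flags sessions that deviate from it.

    # Minimal anomaly-detection sketch, assuming scikit-learn.
    # Features are illustrative: [hour of day, failed login attempts].
    from sklearn.ensemble import IsolationForest

    # Hypothetical history of normal working-hours sessions.
    normal_sessions = [[9, 0], [10, 1], [11, 0], [14, 0], [15, 1], [16, 0]]

    model = IsolationForest(contamination=0.1, random_state=0)
    model.fit(normal_sessions)

    # A 3 a.m. session with many failed attempts is flagged as -1
    # (anomalous); a typical mid-morning session is flagged as 1 (normal).
    print(model.predict([[3, 12], [10, 0]]))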

A wave of new legislation in the EU—including the Cyber Resilience Act and the Artificial Intelligence Act—is set to redefine compliance and risk expectations for large enterprises.

"These developments are positive from a risk governance perspective, as they elevate awareness and understanding of emerging cyber and AI risks", says Ghita Meyer, Head of Lability and Cyber Underwriting.

"However, the real challenge lies in staying two steps ahead of an increasingly complex and fast-moving threat landscape. Regulatory compliance is just the starting point—true resilience requires strategic foresight", she continues.

Meyer recommends seeking a consultation with your insurance company to ensure a comprehensive understanding of your cyber insurance policy.

"This will give you, as a CISO, strong insight into how your insurance will support you and how you can use it best. If you have your insurance with If, you can reach out to your contact person and request a call with a cyber underwriter", she concludes.



Written by

Laura Hyytiäinen, Client Engagement Specialist