In 2025, artificial intelligence (AI) is not just a tool for innovation; it is a weapon in the hands of cybercriminals. The fusion of AI with cyberattacks has ushered in a new era of threats, enabling threat actors to execute more sophisticated, scalable, and convincing attacks than ever before. These attacks don’t just look real; they move faster, too. In 2024, the average dwell time for ransomware dropped to just 24 hours, down from more than 60 hours in 2020.
AI has shifted the cyber threat landscape from opportunistic to precision-engineered. No longer reliant on manual tactics or guesswork, threat actors now harness AI to automate attacks, manipulate human behavior, and craft campaigns that are faster, more believable, and harder to detect. The result? Impersonation attempts look more authentic, breaches unfold in seconds, and attackers seem to know their victims personally. Here’s how they’re doing it:
Phishing Scams
AI enables the creation of highly convincing phishing emails and counterfeit websites that deceive individuals into divulging personal information. By using AI to analyze vast amounts of data on a target, including social media profiles and other publicly available information, threat actors can tailor messages that closely mimic legitimate communications, making them far harder to detect. A 2025 CrowdStrike report found that AI-generated phishing emails achieve a 54% click-through rate, compared with 12% for human-written content.
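To make the defensive side concrete, below is a minimal Python sketch of one common countermeasure: flagging sender domains that closely resemble, but don’t exactly match, an organization’s real domains – a pattern typical of AI-scaled phishing. The LEGITIMATE_DOMAINS list and the 0.85 threshold are illustrative assumptions, not a production configuration or ACTFORE’s implementation.

```python
from difflib import SequenceMatcher

# Domains the organization actually uses (illustrative assumption).
LEGITIMATE_DOMAINS = {"example.com", "example-corp.com"}

def lookalike_score(sender_domain: str) -> float:
    """Highest similarity between the sender's domain and any legitimate
    domain; scores near (but below) 1.0 suggest a lookalike."""
    return max(
        SequenceMatcher(None, sender_domain.lower(), legit).ratio()
        for legit in LEGITIMATE_DOMAINS
    )

def is_suspicious(sender_domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble, but do not match, a real domain."""
    if sender_domain.lower() in LEGITIMATE_DOMAINS:
        return False  # exact match is treated as legitimate
    return lookalike_score(sender_domain) >= threshold

for domain in ("example.com", "examp1e.com", "totally-unrelated.org"):
    print(domain, "->", "suspicious" if is_suspicious(domain) else "ok")
```

Here "examp1e.com" scores roughly 0.91 against "example.com" and is flagged, while an unrelated domain passes – the kind of cheap, explainable check that complements heavier filtering.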
Deepfakes
Criminals use generative AI to produce extremely realistic fake videos, audio, and images that impersonate figures such as executives or public officials. These deepfakes can be used to commit fraud or spread misinformation. Last year, threat actors used AI-generated video to impersonate a multinational firm’s CFO and other colleagues on a video call, convincing an employee to pay out $25 million to fraudsters.
Social Engineering
AI is transforming traditional social engineering into a hyper-personalized and highly effective attack vector. Where threat actors once relied on broad, generic messages, today they use AI to mine and synthesize publicly available information to craft messages that reflect specific interests, relationships, or even recent life events. These attacks often evade traditional security filters because the messages don’t contain malware or suspicious links; instead, they rely on psychological manipulation, urgency, and credibility.
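Because these messages carry no malware or suspicious links, defenders often fall back on behavioral cues. The sketch below is a deliberately crude illustration of that idea: scoring a message for urgency and sensitive-request language. The cue lists are hypothetical; real filters model sender history and context rather than fixed keywords.

```python
# Illustrative cue lists (assumptions); production filters use trained
# models, sender history, and context rather than fixed keywords.
URGENCY_CUES = ("urgent", "immediately", "right away", "before end of day")
REQUEST_CUES = ("wire transfer", "gift card", "payment", "credentials")

def pressure_score(message: str) -> int:
    """Count urgency and sensitive-request cues present in a message.
    A crude stand-in for the behavioral signals real systems model."""
    text = message.lower()
    return sum(1 for cue in URGENCY_CUES + REQUEST_CUES if cue in text)

msg = ("Hi, it's me. I need an urgent wire transfer processed "
       "immediately. Keep this between us.")
print("pressure score:", pressure_score(msg))  # higher = more suspicious
```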
These capabilities aren’t theoretical – they’re being used in real-world attacks across industries.
Business Email Compromise (BEC)
AI can mimic an executive’s writing style to send fraudulent emails instructing employees to transfer funds or disclose sensitive information. These AI-generated emails are often indistinguishable from genuine communications, increasing their success rate. The FBI’s 2023 Internet Crime Report recorded $2.9 billion in losses from BEC attacks during the calendar year, and that figure is likely to climb as these attacks grow more sophisticated.
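One defensive response to style mimicry is stylometry in reverse: comparing an incoming message’s writing style against a baseline built from the executive’s known emails, and requiring out-of-band verification when a payment request diverges. Here is a toy Python sketch of that comparison using function-word frequencies; the word list and sample texts are illustrative assumptions, not anyone’s production model.

```python
import math
from collections import Counter

# Function-word frequencies are a classic stylometric signal; this short
# list is a toy assumption for illustration.
FUNCTION_WORDS = ("the", "and", "of", "to", "a", "in", "that", "is", "for", "it")

def profile(text: str) -> Counter:
    """Frequency profile of function words in a text."""
    return Counter(w for w in text.lower().split() if w in FUNCTION_WORDS)

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two function-word profiles (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in FUNCTION_WORDS)
    norm_a = math.sqrt(sum(a[w] ** 2 for w in FUNCTION_WORDS))
    norm_b = math.sqrt(sum(b[w] ** 2 for w in FUNCTION_WORDS))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# A baseline would be built from the executive's known emails; a payment
# request whose style diverges from it warrants out-of-band verification.
baseline = profile("the numbers are in and it is time for a review of the plan")
incoming = profile("please process the payment to a new vendor account for it")
print(f"style similarity: {cosine_similarity(baseline, incoming):.2f}")
```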
Voice Cloning Scams
Using AI-driven voice synthesis, attackers can replicate a person’s voice, such as a CEO’s, to authorize fraudulent transactions or request sensitive data. In 2024, the CEO of WPP, a multinational advertising agency, was targeted by a deepfake scam involving an AI-generated voice clone. Fraudsters used a publicly available image of the CEO to create a deceptive WhatsApp account and set up a Microsoft Teams meeting, impersonating the executive and attempting to solicit money and personal details from employees. Fortunately, the scam was unsuccessful.
AI-Enhanced Ransomware
Among the most alarming developments is the rise of AI-enhanced ransomware, which uses machine learning to maximize damage before defenders can react. Algorithms can autonomously identify and encrypt critical files, prioritize high-value targets, and even negotiate ransom payments through AI-powered chatbots. Some ransomware variants, for example, now use AI to analyze a victim’s files and determine the most valuable data to encrypt, maximizing the likelihood of a ransom payment.
These advancements allow ransomware to spread more rapidly and efficiently, shrinking the window for detection and response. As AI continues to evolve, ransomware is expected to become even more autonomous and difficult to combat.
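On the detection side, one widely used heuristic exploits a property of encryption itself: encrypted output is statistically close to random, so a burst of high-entropy file writes can betray ransomware in progress. The sketch below computes Shannon entropy per byte; the 7.5 bits-per-byte threshold is an illustrative assumption, and real products combine this signal with many others.

```python
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; encrypted or compressed data approaches 8.0."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Flag content whose entropy suggests encryption; the 7.5 bits/byte
    threshold is an illustrative assumption, not a universal constant."""
    return shannon_entropy(data) >= threshold

plaintext = b"Quarterly report: revenue up 4 percent. " * 64
random_like = os.urandom(4096)  # stands in for freshly encrypted content
print("plaintext entropy:", round(shannon_entropy(plaintext), 2))
print("encrypted-like entropy:", round(shannon_entropy(random_like), 2))
print("flagged:", looks_encrypted(random_like))
```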
AI is transforming cybercrime into a fast-moving, highly scalable threat.
Speed is a critical factor. What once took hours of manual effort – crafting phishing emails, scanning for vulnerabilities, impersonating executives – can now be done in seconds. And the scale of these attacks is just as dangerous: a single threat actor can launch personalized attacks against thousands of targets at once, overwhelming even the most well-resourced security teams.
And then there’s believability. AI-generated content – whether it’s a deepfake video of a CEO or a voice-cloned ransom call – makes it harder than ever to tell what’s real. These attacks exploit trust, bypassing technical safeguards and manipulating human behavior.
Together, these three factors are redefining the threat landscape. Europol’s Executive Director Catherine De Bolle put it bluntly: “Cybercrime is evolving into a digital arms race targeting governments, businesses and individuals. AI-driven attacks are becoming more precise and devastating.”
Defending against AI-powered threats demands more than traditional cybersecurity and more than awareness. Traditional tools are often too slow and siloed to handle today’s threats. Organizations need response systems that move just as fast, and just as intelligently, as the adversaries behind the keyboard – systems that can process vast amounts of data, identify true exposure, and deliver actionable answers quickly.
At ACTFORE, we’ve built our platform to meet that challenge – because when AI is used to launch attacks, AI needs to be part of the response.