I’ve been spending a lot of time talking to CISOs about Cyderes’ advancements in agentic AI and how it powers our security operations platform. There is excitement about using AI to improve threat detection and response. However, the discussion often turns to a critical concern: how AI is changing the nature of cyberattacks.
That concern reminded me of the late 1990s, when the Internet was emerging and e-commerce was introducing new digital risks.
One of my favorite security thought leaders, Bruce Schneier, addressed this in his Crypto-Gram newsletter back in 1998, and again in his book Secrets & Lies: Digital Security in a Networked World. His key insight still holds true today: cybercrime doesn’t fundamentally change; it evolves with technology.
E-commerce crime wasn’t new; it was just an advancement of centuries-old crimes, refreshed with new tech. As I observe the emergence of AI-driven cyberattacks and how to defend against them, it’s clear this trend is (so far) holding true in the AI-powered world.
How the Internet transformed cybercrime
The rise of the Internet marked a turning point in the evolution of cybercrime. It didn’t just create new vulnerabilities; it redefined how crime could happen in the digital age. Schneier observed that the Internet fundamentally changed the nature of crime in three ways that still shape today’s threat landscape.
1. Automation
The Internet enabled cyberattacks to scale like never before. Instead of targeting one person at a time via phone, attackers could automate campaigns to hit millions of targets simultaneously. This shift dramatically increased the efficiency, reach, and profitability of cybercrime, laying the groundwork for the modern social engineering attacks we see today.
2. Action at a distance
While phones allowed crimes to be committed at a distance, long-distance charges and language barriers limited their reach. The Internet eliminated those constraints, enabling attacks to be launched instantly across the globe. This shift not only expanded the pool of potential targets but also created major challenges for law enforcement, as attacks now routinely cross international jurisdictions, complicating investigation, prosecution, and cooperation between agencies.
3. Technique propagation
The Internet revolutionized the way malware, exploits, and hacking techniques are shared. In the physical world, distributing tools or hardware takes effort and carries risk. Online, copying digital payloads takes seconds, and attackers can easily distribute malicious code, pirated content, and exploit kits to global audiences.
While these three fundamental shifts altered how we detect, defend against, and prosecute crimes, they didn’t alter the core motivations of threat actors: money, power, politics, and revenge. What has changed is the scale, risk, speed, and impact of these attacks.
AI is evolutionary, not revolutionary
To date, AI has not made significant improvements to automation. It simply leverages the inherent automation capabilities of the Internet and modern computer systems to automate attacks.
Agentic models can be environmentally aware, and they are great at generating new examples of structured languages, including code, based on patterns in their training data. Underneath, agentic AI models run on the same infrastructure and capabilities available to human attackers.
Similarly, AI hasn’t significantly advanced action at a distance. What it has done is improve the effectiveness of remote attacks, making them more targeted, faster, and harder to detect. AI does change the game in the following ways:
AI accelerates attack chains
Automation improved the throughput of an attack. Scripts run 24x7, scaling a campaign to millions of potential targets with a single execution. This allowed attackers to play a game of percentages: even a success rate below 1% yields thousands of victims (0.1% of 10 million phishing emails is still 10,000 hits). It made DDoS attacks possible in a way that phone line flooding never could.
In the world of AI, it’s no longer about throughput; it’s about latency. AI, especially agentic models and specialized language models, reduces latency across every stage of the attacker lifecycle. From discovery to execution, each phase becomes faster and more adaptive because agents can autonomously make decisions in real time, eliminating delays in human decision-making; a simplified sketch follows the list below.
- Vulnerabilities and exploits are identified more quickly
- Zero-days can be created at lightning speed
- Targets can be discovered and operationalized in real-time
- Decisions at every step of the attack, from execution to impact, can be automated and customized for the environment, decreasing dwell times.
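To make the latency point concrete, here is a deliberately abstract sketch in Python. Every name and timing figure in it is hypothetical; it performs no real action and exists only to show where decision latency lives when a human sits in the loop versus when an agent closes the loop itself:

```python
# Hypothetical, deliberately abstract model of per-phase decision latency.
# None of these functions do anything real; the timing figures are
# illustrative assumptions, not measurements.

def observe_environment() -> dict:
    """Stand-in for gathering telemetry about the target environment."""
    return {"phase": "discovery", "findings": []}

def choose_next_step(state: dict) -> str:
    """Stand-in for deciding what to do next based on observations."""
    return "advance"

def human_in_the_loop(phases: int) -> float:
    """Classic automation: scripts are fast, but a human decides between
    phases, so latency is dominated by human decision time."""
    human_decision_seconds = 4 * 60 * 60  # assume ~4 hours per hand-off
    return phases * human_decision_seconds

def agentic_loop(phases: int) -> float:
    """Agentic model: observe/decide/act closes without a human, so
    per-phase latency collapses to model inference time."""
    model_decision_seconds = 2.0  # assume ~2 seconds per decision
    total = 0.0
    for _ in range(phases):
        state = observe_environment()
        choose_next_step(state)  # decision made in-loop, no hand-off
        total += model_decision_seconds
    return total

print(f"{human_in_the_loop(10) / 3600:.0f} hours with a human in the loop")
print(f"{agentic_loop(10):.0f} seconds with an autonomous agent")
```

The absolute numbers don’t matter; the orders-of-magnitude gap between hand-offs measured in hours and in-loop decisions measured in seconds is the point.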
This means everything we thought we knew about the time an attacker needs to operate will be wrong, and, as an industry, we will have to learn to operate faster to stay ahead of AI-powered cyberattacks.
Sci-fi level impersonation with phishing and deepfakes
Language barriers, poor grammar, and awkward phrasing used to be significant hurdles to successful phishing attacks. Today, LLMs help attackers generate emails with accurate spelling and grammar, in the right tone, mood, and style, written in any language they choose. Ironically, this helps attackers evade ML-based detection models trained to flag phishing emails based on linguistic characteristics.
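As a toy illustration of why this matters, consider a naive detector that scores emails on surface-level linguistic errors. This is a hypothetical simplification of the linguistic features some older phishing models leaned on, not a real detection model; LLM-polished text sails straight through it:

```python
import re

# Toy list of red-flag phrases and misspellings -- an illustrative
# stand-in for linguistic-feature-based phishing detection.
COMMON_ERRORS = re.compile(
    r"\b(kindly do the needful|acount|verifcation|untill|recieve)\b", re.I
)

def crude_phishing_score(email_body: str) -> int:
    """Count surface-level cues: known misspellings plus ALL-CAPS urgency."""
    score = len(COMMON_ERRORS.findall(email_body))
    score += sum(1 for w in email_body.split() if w.isupper() and len(w) > 3)
    return score

legacy_lure = "URGENT!! Kindly do the needful and recieve your acount refund."
llm_polished = ("Hi Dana, following up on the invoice we discussed Tuesday. "
                "Could you confirm the updated routing details by end of day?")

print(crude_phishing_score(legacy_lure))   # several hits
print(crude_phishing_score(llm_polished))  # zero -- the linguistic cues are gone
```

The point isn’t that real phishing classifiers are this crude; it’s that any detector anchored to linguistic mistakes loses its signal once the mistakes disappear.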
The big game-changer will be deepfake voice, image, and video content. From impersonating public officials, business leaders, and family members, to generating fake intimate videos for use in extortion and child exploitation schemes, this technology will fundamentally change how humans interact with each other.
It’s bad enough that we must be wary of every phone call we receive due to rampant phone fraud schemes. It will be much worse when attackers perfect simulated real-time video indistinguishable from the real person. Today, it’s the stuff of sci-fi novels (I’m looking at you, Neal Stephenson), but it’s coming over the horizon at a breakneck pace.
Even better technique propagation: AI lowers the barrier to entry
In Secrets & Lies, Schneier discusses how the software- and data-driven Internet enables the tools of crime (malware, exploits, login credentials, and banking information) to be propagated trivially. Over the past 30 years, we've seen just how accurate that prediction was. The only real obstacle left for aspiring attackers? Technical skill.
Agentic models make this problem worse. Rather than sharing technical information and code, attackers will share domain-specific language models (DSLMs), prompts, and strategies for creating context windows and memory fragments. The DSLMs will encode the knowledge of experienced attackers into interactive software that can generate exploits, create malware, and execute living-off-the-land commands. With a set of natural language prompts in the language of their choice, aspiring cybercriminals can effectively will an attack into existence just like vibe coders can will software into existence today. Instead of propagating techniques, attackers will now be able to propagate expertise.
Innovative tech, incremental impact
Even though attackers are adopting new technology at a record pace, the impact of those attacks is incremental compared to the enormous leap achieved in the late 1990s and early 2000s.
Table 1 outlines the evolution of crime from the days when the telephone dominated global communications to the current state of the art enabled by AI. While it’s clear that AI is having a significant impact on the success ratio of fraud schemes and on the speed of automated attacks at scale, the overall impact of attacks is limited outside of a few use cases.
Table 1
| Crime | Telephone | Internet | AI |
|---|---|---|---|
| Theft | Credit and banking scams | Phishing, credential stuffing | Synthetic identities, deepfakes |
| Racketeering | Bookmaking, extortion, boiler rooms | Dark-web crime rings, ransomware affiliates | Coordination of complex attacks with AI agents |
| Vandalism | Prank calls jamming phone lines | Defacements, destructive malware | Automated destructive bots, dynamically adapting malware |
| Voyeurism | Wiretapping, illegal recording | Webcam hacking, spyware, wiretapping | Deepfake intimate images and videos |
| Exploitation | Telephone solicitation and grooming | Online grooming and exploitation via social media | Conversational agents, deepfake intimate images and videos |
| Extortion | Blackmail | Ransomware | Deepfake compromising content |
| Con games | Fake charities, lottery scams | Crypto, romance, and 419 scams | AI personalization, deepfake identities |
| Fraud | Bank or government official impersonation | BEC, fake e-commerce sites, phishing | Real-time video impersonation, synthetic corporate communications |
As outlined above, advanced persistent threats (APTs) and sophisticated attacks will get faster due to AI’s ability to shrink the decision feedback loop at nearly every stage of the attacker lifecycle. For organizations defending complex enterprise networks, this acceleration is a serious concern. For the average user or small business, many AI-driven cyber threats are still constrained by the technology’s inherent limitations:
Deepfakes are difficult to scale
While attackers can build scalable deepfake attacks by impersonating public figures, convincing impersonations of specific private targets require a significant amount of information about those targets. Highly targeted deepfake attacks are likely to be more successful, but scalable fraud schemes built on this technology will be difficult for the foreseeable future.
Uncanny valley still exists for now
Although AI-generated images are becoming increasingly difficult to distinguish from real photos, AI-generated voice and video are still relatively easy to spot, making these tactics less effective, especially when deployed at scale. Advancements such as Sora 2 and Nano Banana Pro have significantly shrunk the uncanny valley, but it’s still there for now.
Homogenous AI attacks are easier to detect
As with ransomware-as-a-service (RaaS), attacks that look alike are easier to detect at scale. Current approaches simply use AI to automate existing tactics, techniques, and procedures (TTPs), so they remain prone to detection by traditional methods.
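A minimal sketch of why homogeneity helps defenders, assuming a hypothetical feed of observed TTP sequences (MITRE ATT&CK-style IDs used purely as labels): campaigns that reuse one automated playbook produce identical fingerprints, so a single detection generalizes across the whole campaign:

```python
from collections import Counter

# Hypothetical TTP sequences observed across separate incidents.
# AI-automated campaigns that replay one playbook look identical.
observed = [
    ("T1566", "T1204", "T1059", "T1486"),  # phish -> execute -> encrypt
    ("T1566", "T1204", "T1059", "T1486"),  # same playbook, new victim
    ("T1566", "T1204", "T1059", "T1486"),
    ("T1190", "T1505", "T1003"),           # an unrelated, hand-driven intrusion
]

# Sequences repeated across incidents become campaign-level signatures.
fingerprints = Counter(observed)
for sequence, count in fingerprints.items():
    if count >= 3:
        print(f"Campaign-level signature candidate ({count} hits): {sequence}")
```

Real detection pipelines cluster on far richer features, but the principle holds: the more homogeneous the automation, the more coverage one signature buys the defender.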
Staying ahead of evolving AI threats
Don’t panic… yet. To steal another thought from Schneier, “attacks always get better.” AI is evolving faster than any technology shift I have witnessed in my 30-year career. As a result, I have been wrong about almost every prediction I have made about generative AI to date.
So, while I feel it's important to understand these changes in the context of history, I can certainly relate to the growing concern of many security leaders about the rapidly evolving attacker landscape due to innovations in generative AI. Given the current state of AI in use by attackers today, there is no reason to panic.
The motivations of attackers remain the same, as does the mission. AI is just making their current schemes a bit more effective. That doesn’t mean we shouldn’t remain vigilant, with an eye on the horizon, as the current outlook could be different when we wake up tomorrow.
See what Howler Cell is uncovering next
View Howler Cell’s newest findings, spotlighting emerging threats, active campaigns, malware analysis, and shifts in today’s attack landscape.
Ready to close your security gaps?
To stay ahead of today’s relentless threatscape, you’ve got to close the gap between security strategy and execution. Cyderes helps you act fast, stay focused, and move your business forward.