Generative AI is being adopted across enterprises, and the widespread use of AI tools has created significant and unique security threats to businesses.
In a recent webinar, Cyderes’ offensive security experts Evans Mehew, John Ayers, Robert Dickey, David Webster, and Matticus Hunt discussed the issues posed by AI. They highlighted how organizations can prepare and protect themselves.
The formula to combat these new threats is threefold: heightened risk detection, updated training, and improved policies and security controls.
AI Is Making Life Easier for Attackers
Since OpenAI’s release of ChatGPT last November, Cyderes’ offensive security team has seen dramatic changes in the threat landscape. John Ayers, Vice President of Offensive Security, summed up the difference in one word: “weaponization.” He said, “We’ve seen two attacks using AI in the last ten days.”
The biggest issue surrounding generative AI, according to Ayers, is: “We still don’t have a clear line of how do we manage it, how do we control it?”
Rob Dickey, Lead Red Team Operator, warned that “using these AI tools with large language frameworks on the backend will increase the ability for Script Kiddies to run different attacks that they otherwise wouldn’t be able to.”
Bad actors are using AI, for example, to craft more sophisticated phishing emails (featuring better pretext) and to exploit shadow IT.
The bottom line is that less skilled hackers can run ransomware campaigns more efficiently. Matticus Hunt, Offensive Security Consultant, said it best, “Script Kiddies are getting more and more effective with less effort, and the path to sophistication for threat actors has gotten shorter.”
The Changing Landscape of Risk
One of the most critical issues for businesses is that it is becoming increasingly difficult to detect attacks when AI is used. Ayers asked, “How do we look for this, where do we go with that, and how do we empower the SOC?”
What keeps security operation centers (SOCs) up at night is wondering where all the data used by AI is going and how to protect it.
AI also raises tremendous privacy concerns. People are experimenting with it, using audio cloning, for example, to fool victims and breach bank accounts.
Dickey explained how this problem will manifest: “Pretext for phishing emails will become more believable.” Malicious emails won’t have misspelled words or broken English, and victims may be more apt to respond.
Hackers are using public AI tools to build entire malicious frameworks, making solo hackers more of a threat than ever before.
The face of blackmail may change with the use of voice cloning. Spoofed recordings may be used to manipulate people and extort money. Companies will spend more responding to ransomware and blackmail, so they must prepare for these attacks and learn how to protect identities, voices, and recordings.
What Needs to Change to Protect Businesses
Businesses need to shift their focus to be prepared for these new types of threats. Previous approaches to cybersecurity will need to be updated to match new attack methods. David Webster, Offensive Security Team Lead, put it perfectly, “Now more than ever, end-user awareness is super important.”
Heightened risk detection will be handled by AI trained to look for new indicators, such as more believable phishing pretexts, fluent language with no broken English, lookalike domains, SPF (Sender Policy Framework) anomalies, and typosquatting. It will be AI versus AI, with offensive security techniques looking for calls to action that request financial information or passwords and carry a sense of urgency.
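To make the indicators above concrete, here is a minimal, rule-based sketch of how such checks might be expressed in Python. The keyword lists, trusted-domain allow-list, and similarity threshold are illustrative assumptions, not part of any real detection product; AI-driven detection would learn these signals rather than hard-code them.

```python
from difflib import SequenceMatcher

# Illustrative indicator lists (assumed values, not a production rule set):
# urgency language, requests for financial info or passwords, and
# typosquatted (lookalike) sender domains.
URGENCY_TERMS = ["urgent", "immediately", "act now", "within 24 hours"]
SENSITIVE_TERMS = ["password", "wire transfer", "bank account", "ssn"]
TRUSTED_DOMAINS = ["example.com", "examplebank.com"]  # hypothetical allow-list


def looks_typosquatted(domain: str, trusted=TRUSTED_DOMAINS) -> bool:
    """Flag domains very similar to, but not exactly matching, a trusted domain."""
    if domain in trusted:
        return False
    # 0.8 is an assumed similarity threshold for demonstration purposes.
    return any(SequenceMatcher(None, domain, t).ratio() > 0.8 for t in trusted)


def phishing_indicators(sender: str, body: str) -> list[str]:
    """Return the list of matched indicators for triage, not a final verdict."""
    hits = []
    text = body.lower()
    if any(term in text for term in URGENCY_TERMS):
        hits.append("urgency language")
    if any(term in text for term in SENSITIVE_TERMS):
        hits.append("requests sensitive data")
    domain = sender.rsplit("@", 1)[-1].lower()
    if looks_typosquatted(domain):
        hits.append(f"lookalike domain: {domain}")
    return hits


# Example triage call on a suspicious message
print(phishing_indicators(
    "billing@examp1ebank.com",
    "URGENT: confirm your password within 24 hours to avoid suspension.",
))
```

The point of a sketch like this is that each indicator is cheap to evaluate and explainable to an analyst; the article's "AI versus AI" framing replaces these static lists with models trained on evolving attacker behavior.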
Another important focus will be updated, company-wide training. Historical tactics won’t work anymore; end users must know what to look for so they can detect phishing emails and malicious attacks by tracing an email’s origin, checking whether it came from an external sender, or examining the messaging itself.
Organizations must reevaluate their policies and security controls and completely revamp them to respond to these new threats. The big question is, “You want to empower employees to use AI, but how do you keep it safe?”
It’s essential to work with offensive security providers who have their finger on the pulse of these rapid changes and can help keep your business safe.
What Does the Future of Cybersecurity and AI Look Like?
More industries will be affected by AI’s role in cyberattacks, including healthcare, fintech, education, and manufacturing. Threats to critical infrastructure are a top concern.
More companies will adopt a company-wide “trust but verify” policy, with more segmentation and validation. The answer may lie in returning to older authentication approaches such as public key infrastructure (PKI).
Forensics will play a big part in threat detection and prevention, along with industry-specific protections.
AI-Centric Risk Management with Cyderes Offensive Cybersecurity
Defending against AI attacks will be a continuous, ongoing battle. There’s no way around it. The best thing organizations can do is focus on preparedness. Because of this reality, our offensive security team is helping organizations not just detect attacks, but prepare for them with an emphasis on AI-centric security.
At Cyderes, we look at AI-centric threats and drill down by industry and threat actors to see how they are leveraging AI technology. From there, we offer risk intelligence and management and continuous monitoring aligned with the threats to a given vertical and specific company. In this rapidly evolving AI threat landscape “forewarned is good, but forearmed is better,” Ayers explained in an article on AI-centric security.
Interested in learning more about an AI-centric approach to security?
Watch the Replay of Offensive Security in the Era of AI
Watch a replay of this fascinating discussion with our offensive security experts.