October 3, 2023

Will generative AI really supercharge phishing attacks?

Generative AI offers phishing gangs the chance to massively scale their operations. But we may already be equipped with the tools to beat them.

By Greg Noone

The best spearphishing emails are written with a devilish flair. Where other hackers sweatily exploit existing vulnerabilities on websites and APIs, your professional phisher adopts a more elegant approach. Theirs is a masterclass in psychological manipulation, a message laced with enough distracting language and details to indicate to the recipient that, yes, the sender’s link should be trusted and opened. Then, without the target even knowing, defences are bypassed, internal structures reconnoitred and ransomware discreetly seeded.

Fortunately, good spearphishing emails are rare, requiring a savoir-faire and intelligence about the target that most cybercriminals do not bother to cultivate. Instead, the majority employ a more scattergun approach, pumping out millions of emails in broken English that play on the baser impulses of the human psyche (“Oh yes, sir, the money will only be resting in your account”). The arrival of generative AI, however, has allowed cybercriminal organisations to combine both scale and precision. By jailbreaking mainstream models like ChatGPT or using dark web variants like WormGPT, hackers are now capable not only of automating the care and attention required to write a devilishly good phishing email, but also of doing so at a scale hitherto impossible using traditional methods.

Dark web large language models (LLMs) are alarmingly adept at this task. Cybersecurity consultant Daniel Kelley found this out for himself this summer when he tested WormGPT’s ability to produce emails that could be used in business email compromise (BEC) attacks. In response, the model produced eerily convincing text in urgently worded manager-ese. What’s more, it took just a single prompt. The release of such a platform on the dark web, wrote Kelley, effectively afforded unskilled cybercriminals the ability to write or hone phishing emails, “democratis[ing] the execution of sophisticated BEC attacks”.

The extent to which LLMs have supercharged phishing campaigns remains unknown. What is clear is that phishing campaigns generally have increased, though by how much remains in dispute: according to research by Zscaler, such scams rose 50% last year, while one managed service provider indicated that the figure could be as high as 800%. That trend has continued throughout 2023. It’s one that Sage Wohns believes is logically connected to the proliferation of generative AI tools among cybercriminals.

“The crazy thing is that phishing is still the cause of 80% of data breaches,” says the founder and chief executive of Jericho Security, a start-up that consults on AI-generated cybersecurity threats. As such, argues Wohns, the emergence of LLM-fueled cybercrime constitutes nothing less than corporate cybersecurity’s “nuclear moment.” 

Many cybersecurity experts fear that LLMs will massively increase the scale and sophistication of phishing scams. Others, however, believe companies have the tools to hand to thwart their erstwhile attackers. (Photo by JLStock/Shutterstock)

Phishing trips

How, then, does one defend against a super-convincing AI-generated phishing email? As it turns out, any defence is much the same as that you would employ against a message written by a very professional, very debonair human hacker. 

“If I focus on an AI-generated image next to a non-AI generated image and I’m asked, ‘Which one do you think is real,’ and they are basically indistinguishable, I’m going to be in trouble,” says generative AI expert Henry Ajder. But, he adds, that isn’t the situation in which most employees will encounter a phishing email. Instead, any message they receive will be laden with clues indicating its fraudulent intent: a sense of unfounded urgency in the phrasing that pushes the recipient to click on a link or trigger a funds transfer, for example, or simply the sender’s address.

In that sense, while AI might increase the volume of well-written phishing emails, there’ll be nothing inside these messages that can’t be spotted by an employee after a rigorous cybersecurity awareness course. “It just means that we need to go back to the basics of digital hygiene security,” says Ajder. “Companies probably need to just be making sure that their employees are even more vigilant and even more aware of phishing attacks in general, regardless of whether they’re AI-generated or not.”
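
What might that vigilance look like in practice? The sketch below is a deliberately minimal illustration rather than a production filter: it encodes the two clues Ajder mentions, urgency phrasing and a mismatched sender domain, as simple rules. The patterns, domain and function names are illustrative assumptions, not anyone’s actual product.

```python
import re

# Hypothetical rule-of-thumb triage for inbound mail. Everything here
# (patterns, trusted domain, sample message) is an illustrative
# assumption, not a real security policy.

URGENCY_PATTERNS = [
    r"\burgent(ly)?\b",
    r"\bimmediate(ly)?\b",
    r"\bwithin 24 hours\b",
    r"\bwire transfer\b",
    r"\bverify your account\b",
]

TRUSTED_DOMAIN = "example.com"  # assumption: your corporate domain


def phishing_clues(sender: str, body: str) -> list[str]:
    """Return human-readable red flags found in a message."""
    clues = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain != TRUSTED_DOMAIN:
        clues.append(f"external sender domain: {domain}")
    for pattern in URGENCY_PATTERNS:
        if re.search(pattern, body, flags=re.IGNORECASE):
            clues.append(f"urgency phrasing matched: {pattern}")
    return clues


if __name__ == "__main__":
    flags = phishing_clues(
        "ceo@examp1e.com",
        "Urgent: please action this wire transfer within 24 hours.",
    )
    print("\n".join(flags) or "no obvious red flags")
```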


The fact that an AI wrote a phishing email should not be completely discounted, argues Wohns. Some dark web LLMs are capable of surfing the internet and including choice details in phishing emails about their targets’ lives, as one Wall Street Journal reporter discovered when they received a message mentioning their musical side hustle. As such, “it can be educational to see what the machine is using to attack you because that’s an indicator of the data [it’s using]”, says Wohns – data that can then be removed from any social media site it was extracted from. 

Textual analysis software can be used to confirm any suspicions that an email is written by an AI, explains Dr Vasileios Karagiannopoulos, the co-director of the Centre for Cybercrime and Economic Crime at the University of Portsmouth. “We use it in academia [when] we want to see if students have written their essays with AI assistance,” says Karagiannopoulos. Things get a bit more complicated if employees are using models like ChatGPT to routinely hone normal emails. Then, explains Karagiannopoulos, a message innocently buffed up to sound more sophisticated could result in an unnecessary panic for your company’s IT department. 
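
One common technique behind such textual analysis tools is perplexity scoring: a small language model reports how predictable it finds a passage, and unusually predictable prose is flagged as possibly machine-generated. The sketch below illustrates the idea; the choice of model (GPT-2) and the threshold are assumptions for illustration, and, as Karagiannopoulos notes, detectors of this kind are prone to false positives.

```python
# A minimal sketch of perplexity-based screening, assuming the
# Hugging Face transformers library and a public GPT-2 checkpoint.
# The threshold is an assumed cut-off, tuned per corpus in practice.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def perplexity(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Labels = input ids, so the model scores its own surprise at
        # the text; loss is mean negative log-likelihood per token.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return float(torch.exp(loss))


SUSPICION_THRESHOLD = 25.0  # assumed; low perplexity = more predictable

email_body = "Please review the attached invoice at your earliest convenience."
score = perplexity(email_body)
verdict = "possibly machine-generated" if score < SUSPICION_THRESHOLD else "likely human"
print(f"perplexity {score:.1f}: {verdict}")
```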

Vishing, too, is a cyberattack expected to be supercharged by AI. Since the first documented cases in 2019, audio deepfakes have become much more sophisticated and accessible, with companies like Spotify and Microsoft announcing tools that almost perfectly match the intonation and voice patterns of their subjects. Some of these models require mere seconds of audio to create a convincing facsimile of a voice, samples of which can be found in abundance across social media.

Here, again, increased training for staff to recognise the contextual clues that a fraudulent call is taking place seems to be the best remedy for any future onslaught of vishing attacks. Another is to tighten the security framework around fund transfers generally, argues Ajder. “One thing to consider is, if it’s a voice call, maybe using a keyphrase that essentially authenticates who you are,” he says.
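
Ajder’s keyphrase suggestion can be hardened if the phrase is never spoken in full, so a recorded call cannot simply be replayed. The sketch below is one hypothetical way to do that, deriving a short spoken code from a pre-agreed secret via a challenge-response exchange; the secret, code length and function names are all illustrative assumptions.

```python
import hashlib
import hmac
import secrets

# Hypothetical challenge-response built on a shared keyphrase agreed
# out of band. The phrase itself is never spoken, so a voice clone
# armed with recordings of past calls cannot answer the challenge.

SHARED_SECRET = b"pre-agreed keyphrase set up out of band"  # assumption


def issue_challenge() -> str:
    """Callee reads a fresh random challenge to the caller."""
    return secrets.token_hex(4)  # short enough to say over the phone


def response_for(challenge: str) -> str:
    """Caller derives a short code from the challenge and the secret."""
    digest = hmac.new(SHARED_SECRET, challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:6]


# Both parties compute the code independently and compare by voice.
challenge = issue_challenge()
print("challenge:", challenge)
print("expected response:", response_for(challenge))
```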

Mechanisms to detect audio deepfakes on calls, however, remain almost non-existent. “There’s not as much budget for that, frankly,” says Wohns. “It’s theoretically easier to detect [than phishing] because of the cadence of the voice, and things like that. But building a product to be able to detect that is actually something that I haven’t seen CISOs jump at yet.”
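
For illustration, a crude version of the cadence-based detection Wohns describes might extract rhythm and spectral features from a recording and flag calls that deviate from a baseline of known-genuine speech. The sketch below is an assumption-laden toy, not the product he says CISOs have yet to commission; real detectors train dedicated models on labelled corpora.

```python
import numpy as np
import librosa

# Toy anomaly scorer for call audio, assuming the librosa library.
# The feature set, baseline statistics and threshold are illustrative
# assumptions, not a validated detection method.


def voice_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16_000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)  # crude cadence proxy
    return np.concatenate(
        [mfcc.mean(axis=1), mfcc.std(axis=1), np.atleast_1d(tempo)]
    )


def anomaly_score(features: np.ndarray,
                  genuine_mean: np.ndarray,
                  genuine_std: np.ndarray) -> float:
    """Mean absolute z-score against a known-genuine baseline."""
    return float(np.mean(np.abs((features - genuine_mean) / genuine_std)))


# Usage (paths and threshold are hypothetical):
# baseline = np.stack([voice_features(p) for p in genuine_recordings])
# score = anomaly_score(voice_features("suspect_call.wav"),
#                       baseline.mean(axis=0), baseline.std(axis=0))
# flag_for_review = score > 3.0  # assumed cut-off
```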

They should, argues Wohns. The recent cyberattack perpetrated against MGM Resorts, which shut down its slot machines and hotel check-in systems, is thought to have been triggered through a vishing attack. AI will dramatically increase these types of breaches, explains Wohns, not least against IT helpdesks used to doling out passwords and phone numbers to new employees. Once an attacker succeeds in subverting the onboarding process in this way, he argues, it’s game over for the target corporation. “All of a sudden, then, you have access to someone’s email,” says Wohns. “And you can shut down the entire company.”

The MGM Grand Hotel, Las Vegas. Its operator, MGM Resorts, was crippled by a cyberattack last month, which some experts have said originated with a vishing attack. Some fear that these attacks may rise in number as cybercrime gangs use audio deepfake technology to increase the scale of their operations. (Photo by OLOS/Shutterstock)

Theory and reality

But how real is the threat of AI-generated phishing campaigns right now? Some cybersecurity vendors and agencies do claim to have encountered such messages. But for his part, Kelley hasn’t seen any proof that hackers are actively using tools like WormGPT to target corporations.

“There’s evidence that cybercriminals are using LLMs in cybercrime, but no evidence that they’re using them to write, hone, or translate phishing emails,” Kelley told Tech Monitor. “Although this doesn’t necessarily mean that they’re not doing it, it just means that there’s no evidence available in the open to suggest this.”

Ajder, too, hasn’t seen any proof that LLMs are being used for this purpose – yet. Indeed, the public debate about AI-powered phishing today reminds him of the discussion surrounding deepfakes in 2019. “I remember back then, a lot of companies claiming that they were seeing loads of deepfake attacks against people using audiovisual content,” recalls Ajder. “But they would never post any evidence.” 

Since then, multiple vishing attacks using audio deepfakes have been documented, though only a “handful” have been definitively linked to AI. This leads experts like Karagiannopoulos to suspect that CIOs may have a bit more time on their side to prepare for AI-based threats than vendors imply. “I would say it’s still a little bit early for us to be worrying about this,” he says.

It would nevertheless be prudent for CIOs to keep a close eye on AI-fuelled cybercrime trends, according to Kelley. “It’s only a matter of time before this becomes a widespread problem,” he says. It hardly stretches credulity, for example, to imagine hackers tasking LLMs with fabricating an entire chain of fake emails, or instructing the model to respond to enquiries from the target in real time. “It could answer questions, provide follow-up information and respond to the conversation’s flow, all while maintaining the deceptive masquerade.”

Kelley says that he tends to avoid discussions about AI and cybercrime, mainly because there are so many accusations on the dark web about certain tools being scams, or generative AI itself being overhyped. “At one point, I was a critic of the technology myself,” he recalls. “But after using some of the available tools, it’s hard to criticise them.”

Read more: UK Police are being overwhelmed by cybercrime
