VIPRE Security Group has released its Q2 2024 Email Threat Trends Report. The report highlights the prevalence of business email compromise (BEC) attacks: 49% of detected spam emails are categorized as BEC, and 40% of those BEC emails are generated by AI. The report suggests that some of these messages may have been created entirely by AI.

Usman Choudhary, Chief Product and Technology Officer at VIPRE Security Group, offered the following commentary:

“According to the results of our recent study, BEC remains a major scourge. Nearly half of all detected spam emails are attributed to BEC scams, with the CEO, followed by HR and IT, being the most common targets. It takes on a more sinister complexion when a full 40% of the BEC emails uncovered were AI-generated, and in some instances, AI likely created the entire message. Entities without measures to detect these advanced threats could find themselves in hot water, facing double the risk compared to 12 months ago.

“How serious is this? The research identified 16.91 million malicious URLs, a 74% rise from the previous year. This surge highlights the growing use of advanced evasion techniques by attackers. So, as AI technology advances, the potential for BEC attacks grows exponentially. Malefactors are now leveraging sophisticated AI algorithms to craft compelling phishing emails, mimicking the tone and style of legitimate communications. The next wave of BEC attacks could see attackers using AI to dynamically analyze and exploit real-time information, creating tailored and contextually accurate scams nearly indistinguishable from genuine correspondence. Enterprises must stay ahead by adopting robust AI-driven defenses and continuously educating their workforce on emerging threats.”

Security leaders weigh in 

Stephen Kowski, Field CTO at SlashNext Email Security+:

“The rise of AI-generated Business Email Compromise (BEC) attacks, as reported by VIPRE Security Group, aligns with our observations of a significant increase in sophisticated phishing lures over the past year. To combat this evolving threat landscape, organizations must adopt advanced threat detection solutions that leverage AI and machine learning to identify and block these increasingly convincing AI-generated scams in real-time. Additionally, implementing continuous security awareness training for employees and utilizing multi-factor authentication can significantly enhance an organization’s resilience against BEC attacks in the AI age.”
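
Kowski’s mention of multi-factor authentication is concrete enough to sketch. Below is a minimal example of time-based one-time-password (TOTP) verification using the open-source pyotp library; the library choice, user name, and issuer are illustrative assumptions, not part of the report or any vendor’s product.

```python
# A minimal TOTP (RFC 6238) sketch using pyotp, illustrating the second factor
# Kowski recommends. In practice the secret is provisioned once per user and
# stored server-side; here it lives in memory for brevity.
import pyotp

# Enrollment: generate a per-user secret and hand it to the user's
# authenticator app, typically as a QR code of the provisioning URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="alice@example.com",
                                                 issuer_name="ExampleCorp"))

# Login: the user submits the six-digit code from their authenticator app,
# which the server verifies alongside the password.
submitted_code = totp.now()  # stand-in for the code a user would type
print("MFA check passed:", totp.verify(submitted_code, valid_window=1))
```

One caveat: the AitM attacks discussed later in this piece can relay one-time codes in real time, so MFA raises the bar without eliminating the phishing risk the commentators describe.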

Nicole Carignan, Vice President of Strategic Cyber AI, at Darktrace:

“Despite increased focus on email security, organizations and their employees continue to be plagued by successful phishing attempts. With the increasing use of generative AI by threat actors, our ability to rely on traditional threat intelligence and rules- and signature-based defense systems will diminish, as threat actors can now rapidly adopt and change signatures, hashes, and indicators of compromise to evade defenses.

“As the sophistication of phishing attacks continues to grow, organizations cannot rely on employees to be the last line of defense against these attacks. Instead, organizations must use machine learning-powered tools that can understand how their employees interact with their inboxes and build a profile of what activity is normal for each user, including their relationships, tone and sentiment, content, and when and how they follow or share links. Only then can they accurately recognize suspicious activity that may indicate an attack or business email compromise.

“The malicious use of AI poses an increasing threat as it lowers the barrier to entry for threat actors to deploy highly sophisticated attacks with ease and at scale. We can expect offensive AI to continue to be used throughout the attack life cycle, including the use of natural language processing or large language models by threat actors to craft contextualized spear-phishing emails at scale. In addition to phishing, we have also seen evidence of threat actors using generative AI to write malware scripts. Threat actors are also exploiting vulnerabilities within LLMs themselves (adversarial machine learning), allowing them to misuse these models for malicious purposes, such as script generation and targeted content generation.”
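
Carignan’s description of tools that learn what is ‘normal’ for each mailbox is, at its core, unsupervised anomaly detection over per-user email metadata. The sketch below illustrates the idea with scikit-learn’s IsolationForest on invented features (send hour, recipient count, link count, external-recipient flag); the feature set, data, and library choice are assumptions made for illustration, not a description of Darktrace’s approach.

```python
# A minimal sketch of per-user behavioural baselining for mailbox activity.
# The features and distributions below are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical history of one user's outbound email, one row per message:
# [hour sent, number of recipients, number of links, 1 if recipient domain is external]
normal = np.column_stack([
    rng.normal(10, 2, 500),     # usually sent mid-morning
    rng.poisson(2, 500),        # a couple of recipients
    rng.poisson(1, 500),        # rarely more than a link or two
    rng.binomial(1, 0.2, 500),  # mostly internal recipients
])

# Learn what "normal" looks like for this user.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A new message that breaks the learned pattern: sent at 3 a.m., to 25
# recipients, with six links, going to an external domain.
suspect = np.array([[3, 25, 6, 1]])
print(model.predict(suspect))            # -1 means the message is treated as an outlier
print(model.decision_function(suspect))  # lower scores are more unusual
```

A production system would of course draw on the richer signals Carignan lists, such as relationships, tone and sentiment, and link-sharing behaviour, rather than four toy features.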

Pyry Åvist, Co-Founder and CTO at Hoxhunt:

“We’ve seen a significant uptick in attacks that are likely from a new generation of blackhat AI phishing kits available on the dark web. The prices of these kits are dropping, and they produce phishing emails with better localized text, graphics, and landing pages than older-generation kits.

“Meanwhile, evasive dynamic phishing kits are becoming more popular. I’d say that the rise of Adversary-in-the-Middle (AitM) attacks is the most concerning trend we’re seeing involving blackhat AI. AitM is the future of cybercrime. It’s extremely effective and much harder to trace and prevent than traditional social engineering attacks. Its evasive techniques are fundamentally different from traditional static phishing because these attacks intercept legitimate user traffic and deploy malware and malicious content that adjusts on the fly to the user’s context, making it very hard to identify. The trusty old URL-hover technique won’t reveal the malicious intent.

“The technical barrier that’s kept AitM from being widespread on the dark web seems to diminish by the month. Once the barrier is low enough, and the ROI is high enough, we’ll see a wave of breaches from AitM-integrated credential harvesters, BECs, and ransomware.

“Security awareness and phishing training must keep pace with the latest threats so people can spot their telltales and report them to the SOC. We must take a ‘trust but verify’ approach to literally every online interaction we have.”
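
Åvist’s point that the URL-hover technique no longer reveals intent holds partly because the visible link is only the first hop. The sketch below follows a link’s redirect chain to expose where it actually lands; the function name and user-agent string are hypothetical, and real analysis would detonate links in an isolated sandbox, since dynamic kits can serve different content depending on who is asking.

```python
# A minimal sketch of resolving a link's redirect chain so an analyst can see
# the final landing page rather than only the URL shown in the email.
# The URL below is a placeholder; run this only against links you are
# authorised to inspect, ideally from an isolated sandbox.
import requests

def resolve_redirect_chain(url: str, timeout: float = 10.0) -> list[str]:
    """Follow HTTP redirects and return every hop, ending at the final landing URL."""
    resp = requests.get(
        url,
        timeout=timeout,
        allow_redirects=True,
        headers={"User-Agent": "link-inspection-sandbox/0.1"},
    )
    hops = [r.url for r in resp.history]  # each intermediate redirect response
    hops.append(resp.url)                 # final destination after all redirects
    return hops

if __name__ == "__main__":
    for hop in resolve_redirect_chain("https://example.com/"):
        print(hop)
```

That caveat is also why such checks complement, rather than replace, the user reporting and SOC triage Åvist describes.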