Lately it seems conversations about artificial intelligence (AI) are everywhere. There are constant discussions about the potential for ChatGPT, the popular AI chatbot developed by OpenAI, to take over jobs ranging from media to analysis to the tech industry, and even to be put to use in malicious phishing attacks.
But can AI really replace humans? That’s what recent research from Hoxhunt, a cybersecurity behavior change software company, set out to explore by analyzing the effectiveness of ChatGPT-generated phishing attacks.
The study covered more than 53,000 email users and compared the success rate of simulated phishing attacks created by human social engineers with those created by AI large language models. According to the research, while ChatGPT can be put to use for malicious phishing activity, it is still no match for human social engineers, who outperformed it by around 45%.
According to the report, four simulation pairs (one human-crafted and one AI-generated email per pair) were sent to 53,127 email users in more than 100 countries in the Hoxhunt network. The phishing simulations arrived in users’ inboxes just as any legitimate or malicious email would.
The study revealed that professional red teamers induced a 4.2% click rate, versus a 2.9% click rate for ChatGPT. In other words, people were still better at deceiving other people, with the AI-generated messages drawing clicks at roughly 69% of the human rate.
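The two relative figures in the report (the roughly 45% lead cited earlier and the 69% here) are simply two framings of the same pair of click rates. A quick back-of-the-envelope calculation, a sketch using only the rates reported above and not taken from the report itself, shows how they relate:

```python
# Reconciling the two headline figures from the reported click rates:
# 4.2% for human red teamers vs. 2.9% for ChatGPT.
human_rate = 4.2   # % of users who clicked a human-crafted phish
ai_rate = 2.9      # % of users who clicked a ChatGPT-generated phish

# Framing 1: humans outperformed ChatGPT by ~45% in relative terms.
relative_lead = (human_rate - ai_rate) / ai_rate
print(f"Human lead over AI: {relative_lead:.0%}")          # -> 45%

# Framing 2: ChatGPT induced clicks at ~69% of the human rate.
ai_share = ai_rate / human_rate
print(f"AI rate as share of human rate: {ai_share:.0%}")   # -> 69%
```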
An important takeaway from the research is the effect training has on the likelihood of falling for a phishing attack. Users with more experience in a security awareness and behavior change program exhibited significant protection against both human- and AI-generated phishing emails, with failure rates dropping from more than 14% among less-trained users to between 2% and 4% among experienced users.
“Good security awareness, phishing and behavior change training works,” said Pyry Åvist, Co-Founder and CTO of Hoxhunt. “Having training in place that is dynamic enough to keep pace with the constantly changing attack landscape will continue to protect against data breaches. Users who are actively engaged in training are less likely to click on a simulated phish regardless of its human or robotic origins.”
In the end, this new research shows that AI can be used both to educate and to attack humans, creating new opportunities for attacker and defender alike.
“The human layer is by far the highest attack surface and the greatest source of data breaches, with at least 82% of breaches involving the human element,” the report states. “While large language model-augmented phishing attacks do not yet perform as well as human social engineering, that gap will likely close and AI is already being used by attackers. It’s imperative for security awareness and behavior change training to be dynamic with the evolving threat landscape in order to keep people and organizations safe from attacks.”