Social engineering is a defined domain within the social sciences that focuses on efforts to influence particular attitudes and social behaviors. In recent years, there has been growing recognition that social engineering plays a major role in the execution of cybersecurity attacks. Specifically, social engineering in a technical context can be defined as the act of exploiting human weaknesses to gain access to personal information and protected systems; it relies on manipulating people rather than breaching technical controls to penetrate a targeted system.
Traditionally, social engineering techniques have been categorized as either physical or non-physical (often termed "technical" where computer systems are the basis for attack). Physical manifestations of social engineering involve a physical act on the part of the criminal that grants access or steals information. Non-physical social engineering involves the assertion of authority, impersonation and the exploitation of emotions such as greed, curiosity and anger. The intersection of "non-physical" and "technical" social engineering is where criminals focus most of their efforts today.
Hyper-Connectivity Provides Opportunity
Increasingly, workers today are hyper-connected, data-rich and often blur the lines between their public and private information. A person working from home, for example, most likely uses many of the same devices and accounts for private conversations as for professional ones. Electronic communication such as email and, in particular, social media platforms further prepare the ground for sophisticated social engineering by cybercriminals.
Importantly, the definition of a trusted relationship has also changed significantly in recent years. Historically, a criminal leveraging social engineering techniques would have had to imitate a close relation or colleague in the physical world. Now, the spoofing of an email address or the creation of a fake social media account may be sufficient.
Even prior to the COVID-19 pandemic, people were meeting in person less, and the tools that replaced these physical interactions were becoming more ubiquitous. In turn, these very same tools started to become almost perfect vectors for social engineering attacks. More and more of our data must be online today for service providers, governments and others to make use of it and deliver services. People have created digital avatars of themselves (to claim social welfare benefits or bank online, for example), and these digital identities are proving to be just as valuable as physical human targets have been for centuries.
Deepfakes Present a Major Security Threat
Within the general trend of more advanced social engineering techniques enabled by AI and machine learning, "deepfakes" represent a particular concern to enterprise security leaders as they attempt to predict what the next few years will bring. Deepfakes (essentially, fabricated video identities) use AI and machine learning to create photo-realistic simulations of real individuals appearing on camera.
This technology has been growing steadily in sophistication for several years, but the use of deepfakes in the cyberthreat domain has yet to materialize as a major source of concern. It is worth reflecting, however, on the technology's potential impact given the surge in video conferencing since COVID-19. Many security researchers now predict that deepfakes could become a major security threat in the 2021-2022 period.
We are likely to see both legitimate and illegal use of this technology in the coming years. Early adopters are likely to appear in various parts of the entertainment industry. Legitimate consumer applications such as "faceswap" are early commercial offerings that generate revenue from the technology and drive down the cost of applying it in the field. Criminal use of deepfake technology has yet to materialize fully, but future targets could include political figures (particularly those with a large online presence) and business leaders, who could be targeted with ransomware or "business email compromise" attacks.
Audience Manipulation
In the 2021-2024 timeframe, deepfake videos will likely affect domains such as politics, the media and large businesses. Politicians are on camera frequently, often in stationary positions, which creates an opportunity for politically motivated groups to use this technology to spread false messages, manipulate audiences and damage reputations. In business, there is already evidence that criminals are researching how deepfake technology could be leveraged to manipulate unsuspecting employees in espionage or financially motivated attacks.
A particularly interesting area of innovation is the technology of "mouth mapping," developed by researchers at the University of Washington, Seattle. Here, targets can be made to appear to say anything in simulations so realistic that even a trained eye would find them hard to distinguish from reality. This technology is likely to lead to viral political videos that incite fear, uncertainty and even crime. It is also applicable to social media and web conferencing (particularly relevant in a post-pandemic era).
The effects of these types of attacks are likely to amplify the impact of the cybercrime we already witness today. Manipulation of social media to spread fake news, fraudulent executive instructions to transfer money, and the breach and subsequent exfiltration of sensitive data are all risks likely to see materially increased impact in the coming years thanks to deepfake technology.
The Role of Enterprise Security Leaders
There is little that risk managers can do to combat the development of deepfake technologies, but careful risk selection will become increasingly important as this and other offensive technologies evolve. Careful analysis of organizational cyber resilience and maturity will likely prove to be the difference between success and failure for insurers and cybersecurity leaders alike. In addition, defensive technologies are being developed to detect fake videos, and the adoption of these emerging cyber defenses will be key in combating the next generation of cybercrime.
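To make the detection point concrete, the sketch below shows one common pattern in this emerging space: sampling frames from a video and averaging the output of a pretrained real-versus-fake classifier. It is a minimal illustration, not any particular vendor's tool; the ResNet-18 backbone, the "detector.pt" checkpoint and the frame-sampling interval are all assumptions made for the example.

```python
# Minimal sketch of frame-level deepfake detection. Assumes a binary
# real-vs-fake classifier has already been trained; "detector.pt" is a
# hypothetical checkpoint, not a real published model.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Standard ResNet-18 backbone with a single-logit head for real vs. fake.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
model.load_state_dict(torch.load("detector.pt", map_location="cpu"))
model.eval()

def fake_probability(video_path: str, sample_every: int = 30) -> float:
    """Average the per-frame 'fake' score over sampled frames."""
    capture = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                scores.append(torch.sigmoid(model(batch)).item())
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    score = fake_probability("incoming_call_recording.mp4")
    print(f"Estimated probability of manipulation: {score:.2f}")
```

In practice, production detectors also examine audio-video synchronization, facial landmarks and compression artifacts, and they complement rather than replace process controls such as out-of-band verification.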
As always, risk managers should take a balanced approach when identifying and selecting the right risks. There is no silver bullet of questioning that will translate into zero losses; however, companies can still assess how a given risk stacks up against established information security frameworks. Taking the NIST Cybersecurity Framework as an example, understanding how companies identify, protect, detect, respond and recover will provide a more holistic view of the risk.
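As a hypothetical illustration of that holistic view, the short sketch below rolls a handful of assessment answers up to the five NIST CSF functions. The questions and the 0-5 maturity scale are invented for the example and are not drawn from the framework itself.

```python
# Illustrative roll-up of questionnaire answers to the five NIST CSF
# functions; the questions and 0-5 maturity scores are hypothetical.
from statistics import mean

answers = [
    ("Identify", "Asset inventory maintained?", 4),
    ("Protect",  "MFA enforced for email and VPN?", 3),
    ("Protect",  "Staff trained to spot social engineering?", 2),
    ("Detect",   "Anomalous payment requests flagged?", 1),
    ("Respond",  "Incident escalation process documented?", 3),
    ("Recover",  "Backups tested in the last quarter?", 4),
]

# Average the scores within each function for a holistic view of the risk.
for fn in ["Identify", "Protect", "Detect", "Respond", "Recover"]:
    scores = [s for f, _, s in answers if f == fn]
    print(f"{fn:8s} {mean(scores):.1f}" if scores else f"{fn:8s} n/a")
```

A roll-up like this makes weak functions (here, "Detect") visible at a glance, which is the holistic view the framework is meant to provide.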
Technology can play its part, but to defend holistically against the looming threat of deepfakes, companies must consider people, process and technology together. Companies are increasingly training their employees to identify and detect social engineering attacks and building processes for reporting and escalating such incidents before others fall victim.