The past few years have seen explosive growth in automation technologies for cybersecurity professionals, from security orchestration, automation and response (SOAR) platforms to user and entity behavior analytics (UEBA).


With Deloitte predicting that the market for cyber artificial intelligence (AI) tools will grow by $19 billion by 2026, many hail these tools as the next major advance in cybersecurity.


It’s not hard to understand their enthusiasm: as an ever-increasing number of cyber threats combines with a shortage of IT talent across the board, today’s organizations need as much help as possible. Advances in AI have transformed many industries, and cybersecurity seems like the next logical step.


Unfortunately, history has proven that businesses tend to overestimate the transformative potential of automation in the short term while misunderstanding it in the long term. And if we take a closer look at the current state of automation in cybersecurity, we see it's nowhere close to replacing human talent. If anything, we probably depend on it too much already.


Over the next decade in cybersecurity, automation and expertise will go hand in hand, with humans at the helm. In the short term, there are better ways to deal with mounting cybersecurity challenges than throwing automation at the problem.


Automation and Complacency

Although cybersecurity automation is only in its nascent stages, many organizations are already relying on it, sometimes at the expense of their cyber preparedness and bottom line. A recent report found that cybersecurity incidents caused by network misconfigurations cost organizations 9% of their annual revenue, yet fewer than 5% of respondents said they prioritize cybersecurity for routers, switches and network edge devices.


The same report gives us some idea of why: 70% cited inaccurate automation as a top challenge for meeting their security and compliance requirements. Trumped-up marketing has encouraged an attitude of complacency toward network perimeters, exactly where a human touch is needed most.


But why is a human touch needed in the first place? With a baseline for normal behavior and a training set for malicious activity, network security seems like the ideal use case for machine learning (ML) and automated rule-setting. Here are five simple reasons:


1. Inherent Limitations

Automated solutions can only reliably respond to threats they have been trained to detect. Cyber actors, however, are innovative and constantly develop new techniques to stay ahead of defenders; automation tools lack the situational awareness to recognize a novel attack vector, leading to false positives and negatives.


Ultimately, even the best automated solutions suffer from the same weakness that has made servers vulnerable to distributed denial-of-service (DDoS) attacks: if attackers can't find a way around them, they can simply overload them with junk data and requests.


2. “Automated” Is Not “Autonomous”

In the real world, few automated systems can function without an army of human experts to guide them. Algorithms do almost all stock trading today, but that has not eliminated traders or analysts from Wall Street; their jobs have simply changed, and some have been replaced by the computer engineers required to make it all work. Likewise, all existing automation technologies, from SOAR and UEBA to extended detection and response (XDR), depend on humans to set rules and workflows and to monitor their behavior over time.
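To make that concrete, here is a minimal sketch of what a human-authored automation rule might look like. It is a hypothetical example rather than the syntax of any real SOAR product; the threshold, conditions and helper functions (open_ticket, isolate_host) are all invented, and each one represents a decision a person has to make and keep revisiting as the threat landscape changes.

```python
# Hypothetical example of a human-authored automation rule.
# Every condition, threshold and response action below is a judgment call
# an engineer has to make before the "automation" can act.

from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    failed_logins: int
    is_known_asset: bool

# Human-chosen threshold: how many failed logins count as "suspicious"?
FAILED_LOGIN_THRESHOLD = 10

def open_ticket(alert: Alert) -> None:
    print(f"Ticket opened for {alert.source_ip} ({alert.failed_logins} failed logins)")

def isolate_host(alert: Alert) -> None:
    print(f"Host {alert.source_ip} isolated from the network")

def run_playbook(alert: Alert) -> None:
    """Apply the human-defined workflow to a single alert."""
    if alert.failed_logins >= FAILED_LOGIN_THRESHOLD:
        open_ticket(alert)
        # Only auto-isolate machines we don't recognize; a person decided this, too.
        if not alert.is_known_asset:
            isolate_host(alert)

if __name__ == "__main__":
    run_playbook(Alert(source_ip="203.0.113.7", failed_logins=14, is_known_asset=False))
```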


3. The Danger of Misconfiguration

Because automated systems depend on human configuration, they are only as good as the rules they are given. Misconfigurations can lead to disaster: if a workflow with faulty parameters is automated, for instance, the system can generate thousands of erroneous tickets.


Poorly configured rules and detection thresholds can lead to over-alerting (flagging legitimate activity as malicious) or under-alerting (failing to detect real threats). Cyber experts will always have to be present to evaluate and adjust the performance of automated systems in real time and in response to the changing threat landscape.
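As a toy illustration of that tradeoff, the sketch below scores a handful of events against a detection threshold. The anomaly scores, labels and threshold values are invented for the example; the point is that a threshold set too low buries analysts in false positives, while one set too high silently misses real threats, and only a human reviewing the results can decide where the line belongs.

```python
# Toy illustration (invented data): how a detection threshold trades
# false positives against false negatives.

# Each event has an anomaly score (0-1) and a ground-truth label.
events = [
    (0.15, "benign"), (0.30, "benign"), (0.45, "benign"),
    (0.55, "benign"), (0.60, "malicious"), (0.80, "malicious"),
    (0.95, "malicious"),
]

def evaluate(threshold: float) -> tuple[int, int]:
    """Return (false_positives, false_negatives) for a given threshold."""
    false_positives = sum(1 for score, label in events
                          if score >= threshold and label == "benign")
    false_negatives = sum(1 for score, label in events
                          if score < threshold and label == "malicious")
    return false_positives, false_negatives

for threshold in (0.2, 0.5, 0.9):
    fp, fn = evaluate(threshold)
    print(f"threshold={threshold:.1f}: {fp} false positives, {fn} false negatives")
```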


4. Automation Can’t Solve Social Engineering

According to Deloitte, up to 91% of all cyberattacks begin with a phishing email to an unsuspecting victim, and email is far from the only phishing channel cyber actors use today (social media, mobile apps and SMS are others). Unfortunately, automation can do little to prevent employees from succumbing to social engineering attacks; that requires cyber training and education from experienced professionals.


5. The Impending AI Arms Race

Cybercrime is no longer dominated by lone wolves or small hacking teams; it is an organized, international operation in which trillions of dollars change hands. It intersects with geopolitical interests, corporate espionage and a teeming black market where cyber actors constantly work to develop better tactics, techniques and procedures (TTPs).


As businesses adopt AI to defend their networks, cyber actors will adopt the same AI to attack them more effectively. In the future, defending against global cybercrime will be a never-ending arms race in which neither side has a clear advantage unless it comes in the form of human expertise, creative thinking and the ability to adapt rapidly.


Augmentation Over Replacement

In the big picture, automation will inevitably play a role in the cyber strategies of the future, but it will assist cyber operators rather than replace them. It will help eliminate repetitive and routine tasks, reduce human error, and augment threat analysis while lowering the rate of false positives.


But in both the short and long term, automation can't replace cyber operators in a security operations center (SOC), nor can it provide the cybersecurity experts who help organizations maintain resilience against threats to their bottom line. For that, businesses can look to managed service providers (MSPs) and outsourced IT positions like virtual chief information officers (vCIOs).