Security alerts are essential for effectively mitigating and preventing cyberattacks. But a key challenge of modern threat protection solutions is the sheer number of alerts they generate, leading to “alert fatigue.”
To learn more about the dangers of alert fatigue, we talk to Mark Kedgley, CTO at New Net Technologies (NNT), a Naples, Fla.-based provider of IT security and compliance software.
Security Magazine: What is your title and background?
Kedgley: I’m the CTO at New Net Technologies (NNT), part of the NNT software team, providing complete solutions for server, network and PC change and configuration management. In this role, I’m responsible for driving ongoing product development. My primary objective is to continually push NNT’s data security and compliance solutions to protect our customers’ sensitive data against security threats and network breaches in the most efficient and cost-effective manner, while being easier to use than anything else available on the market. I’ve been the CTO at NNT since 2009, and have more than 20 years’ experience in IT business development and sales.
Security Magazine: Why is alert fatigue dangerous?
Kedgley: Alert fatigue is simply the numbing effect that results from a security analyst being overloaded with security alerts, especially when there are too many false alarms. Past research has shown that more than roughly 30 security incidents a day per analyst is too many to investigate properly; that is when fatigue sets in and corners are cut or alerts are simply ignored. It’s a headache that too many cybersecurity vendors are actually guilty of helping to create. It comes about as a side effect of the features race, especially in the SIEM market, and of trying to automate the identification of security breach activity.
Unfortunately, far too many of these threat-signature technologies just aren’t smart enough to deliver valuable intelligence, leading to false positives that mask genuine security incidents. Increasingly, security professionals are looking to simplify their security strategy, seeking to master fundamental security controls instead of being distracted by the latest silver-bullet product. As a case in point, using intelligent change control as a more reliable breach detection technology not only cuts out the unwanted change noise from business-as-usual activities, but also provides more meaningful context for changes than simple log data can.
Security Magazine: Have you observed this problem worsening during the pandemic?
Kedgley: Everything has been put under more pressure than usual during the pandemic, with staff and regular processes stretched by the extra workloads of the last few months. Inevitably, this means the resources available to handle alerts are at a greater premium than usual, and alert fatigue has become more acute.
Security Magazine: How can IT/security teams minimize alert fatigue? Can you suggest three approaches?
Kedgley: The first thing is prioritization. It’s better to properly investigate the most serious security incidents than to try to spread resources too thinly. This may mean reducing the scope of monitoring or, better still, improving the intelligence of security monitoring to cut out the false alarms. When we talk about the security best practice of change control, reducing change noise is the first place we start. Change noise is generated by regular, expected changes such as patching, which can drown out and mask the changes you are really interested in: the unexpected and suspicious changes, including any indicators of compromise and breach activity.
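To make the prioritization idea concrete, here is a minimal Python sketch of severity-based triage. The field names, severity ranking and daily-capacity threshold are assumptions for the illustration, not NNT product logic; the point is simply that dropping expected noise and ranking what remains keeps the daily investigation load within what an analyst can realistically handle.

```python
from dataclasses import dataclass

# Hypothetical severity ranking; real SIEM/FIM tools expose their own fields.
SEVERITY_RANK = {"critical": 3, "high": 2, "medium": 1, "low": 0}

@dataclass
class Alert:
    source: str       # e.g. "file-integrity", "ids", "auth-log"
    severity: str     # "critical" | "high" | "medium" | "low"
    expected: bool    # True if the alert matches planned, business-as-usual activity

def triage(alerts, daily_capacity=30):
    """Drop expected (business-as-usual) noise, then return the highest-severity
    alerts up to the number an analyst can properly investigate in a day."""
    unexpected = [a for a in alerts if not a.expected]
    ranked = sorted(unexpected, key=lambda a: SEVERITY_RANK[a.severity], reverse=True)
    return ranked[:daily_capacity]
```

The `daily_capacity=30` default simply echoes the research figure mentioned above; in practice the cap would be tuned to the team’s actual headcount.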
The second step is enrichment. By giving alerts more context, there is more opportunity to apply filters and rule logic. For example, an automated patch deployment system will typically use a dedicated service account to administer changes. By including ‘who made the change’ in any alert, it’s easy to filter out changes applied by the authorized deployment system. Equally, file changes can be analyzed against a file reputation whitelist, so that unknown files with no established good reputation can be investigated and remediated ahead of changes that may be unwanted but are at least known not to be zero-day malware.
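As an illustration of that enrichment step, the sketch below filters out changes applied by an authorized patch-deployment service account and checks file hashes against a reputation whitelist. The account name, whitelist contents and change-record fields are invented for the example, not a real product API.

```python
# Hypothetical authorized deployment account and file reputation whitelist.
AUTHORIZED_DEPLOYERS = {"svc-patch-deploy"}
KNOWN_GOOD_HASHES = {
    # Illustrative placeholder entry (SHA-256 of an empty file).
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def needs_investigation(change):
    """Return True if a file change should be escalated to an analyst.

    `change` is assumed to be a dict like:
    {"path": "...", "sha256": "...", "changed_by": "..."}
    """
    # Changes made by the authorized patch-deployment account are expected noise.
    if change["changed_by"] in AUTHORIZED_DEPLOYERS:
        return False
    # A file with a known-good reputation is unlikely to be zero-day malware,
    # so it can wait; unknown files go to the front of the queue.
    if change["sha256"] in KNOWN_GOOD_HASHES:
        return False
    return True
```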
The third step is correlation. This is where security controls are combined to provide a more joined-up security posture.
Traditional change management used in isolation is flawed: it’s easily bypassed, and the level of scrutiny on the implemented changes is minimal. By contrast, the security best practice of change control combines change management principles with security-grade tracking, analysis and verification of changes. Verification of changes is a game-changer, and it can be delivered through integration of ITSM Ops and Security processes, an approach known as SecureOps.
By leveraging knowledge of change windows, and even a pre-defined change manifest that itemizes expected changes, observed changes can be validated as safe and approved; more importantly, unplanned and unexpected changes can be isolated as potential security incidents.
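A minimal sketch of that correlation idea, assuming a simple change-record format: observed changes are checked against a pre-approved manifest and the planned change window, and anything falling outside either is isolated as a potential incident. The manifest structure and field names here are hypothetical.

```python
from datetime import datetime

# Hypothetical pre-approved change manifest: which files may change, and when.
CHANGE_MANIFEST = {
    "window_start": datetime(2020, 10, 10, 1, 0),
    "window_end":   datetime(2020, 10, 10, 3, 0),
    "approved_paths": {"/usr/bin/openssl", "/etc/app/config.yml"},
}

def classify(observed_changes, manifest=CHANGE_MANIFEST):
    """Split observed changes into approved changes and potential incidents."""
    approved, incidents = [], []
    for change in observed_changes:  # each change: {"path": ..., "time": datetime}
        in_window = manifest["window_start"] <= change["time"] <= manifest["window_end"]
        planned = change["path"] in manifest["approved_paths"]
        (approved if in_window and planned else incidents).append(change)
    return approved, incidents
```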
Security Magazine: How important is it for modern cloud threat protection solutions to not add to this noise?
Kedgley: Even more important! The reality is that the cloud and container world adds a whole new set of configuration datasets to control and secure. To get the advantages of this new computing model, we inevitably end up with even more moving parts to secure and, in turn, greater potential for misuse or abuse of functionality. Microservices and container-delivered services are more dynamic, with a higher change velocity than regular virtualized and physical platforms.
The consequence is even more sources of change and alert noise. This is against the backdrop of IBM’s recent Cost of a Data Breach Report, which concluded that “misconfigured cloud servers tied for the most frequent initial threat vector in breaches caused by malicious attacks. Breaches due to cloud misconfigurations resulted in the average cost of a breach increasing by more than half a million dollars to $4.41 million.”