CardinalOps recently released the Third Annual Report on the State of SIEM Detection Risk, which analyzes real-world data from production SIEMs — including Splunk, Microsoft Sentinel, IBM QRadar and Sumo Logic — covering more than 4,000 detection rules, nearly one million log sources and hundreds of unique log source types.
The data spans diverse industry verticals including banking and financial services, insurance, manufacturing, energy, media & telecommunications, professional & legal services and MSSP/MDRs.
According to industry analysts, the SIEM continues to be the "operating system of the SOC" and is not going away anytime soon.
However, most organizations struggle to continuously assess and strengthen the effectiveness of their existing SIEMs, using standard frameworks such as MITRE ATT&CK to measure their readiness to detect the highest-priority threats. This is a major challenge because adversary techniques constantly change, attack surfaces keep expanding, and skilled detection engineers are difficult to hire and retain.
Key report highlights
- Enterprise SIEMs have detections for only 24% of all MITRE ATT&CK techniques. That means they're missing detections for around three-quarters of all techniques that adversaries use to deploy ransomware, steal sensitive data and execute other cyberattacks.
- SIEMs are already ingesting sufficient data to potentially cover 94% of all MITRE ATT&CK techniques. But many enterprises are still relying on manual and error-prone processes for developing new detections, making it difficult to reduce their backlogs and act quickly to plug detection gaps.
- 12% of SIEM rules are broken and will never fire due to data quality issues such as misconfigured data sources and missing fields — resulting in increased risk of breach due to undetected attacks.
- Enterprise SIEMs are following best practices and collecting data from multiple security layers such as Windows endpoints (96%), network (96%), IAM (96%), Linux/Mac (87%), cloud (83%) and email (78%). But monitoring of containers lags far behind other layers at only 32%, despite Red Hat data showing that 68% of organizations are running containers. This low number could be because it's challenging for detection engineers to write high-fidelity detections that uncover anomalous behavior in these highly dynamic environments.
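The coverage and broken-rule findings above can be illustrated with a minimal sketch. The idea: map each detection rule to the ATT&CK techniques it covers, and flag any rule that filters on fields absent from the ingested log data, since such a rule can never fire. The rule structure, field names, and technique count below are illustrative assumptions, not any vendor's schema or the report's methodology.

```python
# Sketch: estimate ATT&CK technique coverage of a SIEM rule set and flag
# "broken" rules that reference fields missing from the ingested data.
# All rule names, fields, and the technique total are hypothetical examples.

TOTAL_ATTACK_TECHNIQUES = 625  # approximate; varies by ATT&CK version/matrix

rules = [
    {"name": "susp_powershell", "techniques": {"T1059.001"},
     "fields": {"CommandLine", "User"}},
    {"name": "brute_force", "techniques": {"T1110"},
     "fields": {"src_ip", "result"}},
    {"name": "dns_tunnel", "techniques": {"T1071.004"},
     "fields": {"query", "bytes_out"}},
]

# Fields actually present across the ingested log sources.
ingested_fields = {"CommandLine", "User", "src_ip", "result", "query"}

covered = set()   # techniques covered by rules that can actually fire
broken = []       # rules referencing fields the data never contains
for rule in rules:
    missing = rule["fields"] - ingested_fields
    if missing:
        broken.append((rule["name"], missing))
    else:
        covered.update(rule["techniques"])

coverage_pct = 100 * len(covered) / TOTAL_ATTACK_TECHNIQUES
print(f"Working rules cover {len(covered)} techniques ({coverage_pct:.2f}%)")
print(f"Broken rules: {broken}")
```

Here the `dns_tunnel` rule is flagged as broken because `bytes_out` is never ingested, mirroring the report's point that data-quality gaps, not missing data feeds, are often what silently disable detections.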