If you followed along with Post 2 of this series, you will remember the argument about MTTD and its most significant structural flaw: the clock does not start until detection occurs. That flaw sits on top of a deeper problem, which is the one we are covering today. Before you can have a meaningful detection time, you need to have meaningful detection coverage. And most organizations, if they are being honest with themselves, have far less of it than they believe.
I want to be precise about what I mean by detection coverage, because the term gets used loosely. Detection coverage is not the number of rules in your SIEM. It is not the number of log sources you are ingesting. It is the percentage of known adversary techniques across the attack lifecycle for which your current detection logic would actually fire a meaningful alert if an attacker executed that technique in your environment today. That definition changes the conversation significantly, because it asks a harder question than most organizations are currently asking themselves.
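That definition reduces to a simple ratio: validated detections over relevant techniques. The sketch below makes it concrete; the technique IDs and the split between "relevant" and "validated" are hypothetical examples, not a real inventory.

```python
# Hypothetical inventory: ATT&CK technique IDs relevant to the environment,
# and the subset for which detection logic has been proven to fire in testing.
relevant_techniques = {"T1059", "T1078", "T1021", "T1003", "T1567"}
validated_detections = {"T1059", "T1078"}

# Coverage is the fraction of relevant techniques with validated detections,
# not the raw number of rules in the SIEM.
coverage = len(validated_detections & relevant_techniques) / len(relevant_techniques)
print(f"Detection coverage: {coverage:.0%}")  # 2 of 5 techniques -> 40%
```

Note what the denominator is: techniques, not rules. An organization could have fifty rules mapped to those two covered techniques and the coverage number would not move.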
The Data That Should Make Everyone Uncomfortable
CardinalOps published its Fifth Annual State of SIEM Detection Risk report in June 2025, and the findings are worth sitting with carefully. The study analyzed data from more than 13,000 unique detection rules and hundreds of production SIEM environments including Splunk, Microsoft Sentinel, IBM QRadar, CrowdStrike LogScale, and Google SecOps, using the MITRE ATT&CK framework as the benchmark for coverage assessment. [1]
The pattern behind those findings deserves careful attention. The telemetry organizations need is largely already flowing into their SIEMs. The problem is not data availability. It is the gap between the data that exists and the detection logic that has been built to operate on it. Organizations are sitting on a mountain of telemetry and using only a fraction of it to actually detect adversary behavior. The traditional approach to detection engineering is not keeping pace with that opportunity, and the result is a coverage gap that most dashboard metrics will never surface because the undetected techniques are, by definition, invisible to those metrics.
Why MITRE ATT&CK Is the Right Organizing Framework
Before diving into how to measure coverage honestly, it is worth establishing why the MITRE ATT&CK framework is the right structure for organizing that question. ATT&CK is a knowledge base of adversary tactics, techniques, and procedures derived from real-world observations of threat actor behavior. It is not a theoretical model. It is a continuously updated catalog of the things attackers actually do, mapped across the full attack lifecycle from initial access through impact. As of version 18.1, released in late 2025, it encompasses 14 tactics and hundreds of individual techniques and sub-techniques for enterprise environments. [2]
The reason it matters for coverage measurement is that it gives you a common reference point that is independent of your tooling. A detection coverage question framed as “how many of our SIEM rules are active” tells you almost nothing useful. The same question framed as “how many of the 14 ATT&CK tactics do we have meaningful detection logic for, and which specific techniques within each tactic are we blind to” gives you an actionable map of where your program is strong and where it is not. One measures rule inventory. The other measures defensive capability against real adversary behavior. These are very different things.
A coverage question framed around rule count measures inventory. The same question framed against MITRE ATT&CK measures defensive capability. Organizations that confuse the two end up with high rule counts and large blind spots, sometimes simultaneously.
— K.C. Yerrid
The additional value of ATT&CK for coverage measurement is that it lets you prioritize intelligently. Not all techniques carry equal risk. Some are used by virtually every threat actor category across every industry. Others are highly specialized and relevant only to specific adversary groups or target environments. When CrowdStrike reported that 81% of intrusions in the period from July 2024 to June 2025 were malware-free, that single data point significantly reshapes which ATT&CK techniques should be prioritized for detection coverage: credential access, lateral movement, and living-off-the-land execution techniques suddenly matter more than signature-based malware detection. [3] Coverage measurement done against ATT&CK makes those prioritization decisions visible and data-driven rather than intuitive and inconsistent.
The Three Layers of the Coverage Problem
In my experience, when organizations dig into their actual ATT&CK coverage, they tend to find the same pattern playing out across three distinct layers. Understanding these layers helps explain why coverage gaps persist even in programs with significant tool investments and experienced teams.
The first layer is tactic-level blindness. Most security programs have reasonable coverage for a subset of tactics and essentially no coverage for others. Endpoint-focused programs tend to have decent coverage for execution, persistence, and defense evasion, because those are the techniques that endpoint detection tools were built to catch. The same programs frequently have weak or nonexistent coverage for cloud-based lateral movement, identity abuse, and collection techniques, because those require different data sources and different detection logic that the program never built out systematically.
The second layer is the broken rule problem. The CardinalOps research found that 13% of existing SIEM rules are non-functional and will never trigger. [1] This is a coverage problem that looks like a coverage solution. An organization that has a detection rule for a specific ATT&CK technique is not actually covered for that technique if the rule will never fire because the data source it depends on is misconfigured, the log field it references does not exist in the current schema, or the rule logic has not been updated to account for changes in the underlying environment. Rule count and functional rule count are different numbers, and most organizations cannot say how far apart theirs are.
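One of the cheapest checks for this layer is static: verify that every field a rule references still exists in the current log schema. The sketch below assumes rules and schemas can be represented as simple field sets; the rule names, sources, and schema contents are invented for illustration.

```python
# Hypothetical current schema: log source -> fields actually present today.
current_schema = {
    "auth_events": {"user", "src_ip", "result"},
    "proc_events": {"host", "image", "cmdline"},
}

# Hypothetical rule inventory: each rule names its source and the fields it reads.
rules = [
    {"name": "brute_force", "source": "auth_events", "fields": {"user", "result"}},
    {"name": "lolbin_exec", "source": "proc_events", "fields": {"image", "parent_image"}},
    {"name": "vpn_anomaly", "source": "vpn_events", "fields": {"user"}},
]

def is_functional(rule):
    """A rule can only fire if its source exists and every referenced field is present."""
    available = current_schema.get(rule["source"], set())
    return rule["fields"] <= available

functional = [r["name"] for r in rules if is_functional(r)]
print(f"{len(functional)}/{len(rules)} rules functional: {functional}")
```

Here `lolbin_exec` references a field the schema no longer carries and `vpn_anomaly` points at a source that is not being ingested at all; both would count toward rule inventory while contributing nothing to actual coverage.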
The third layer is coverage drift. Even rules that were working correctly at the time they were written may have silently degraded as the environment changed around them. New cloud workloads come online. Log pipelines change. Schema formats update. Authentication systems get replaced. Each of these changes can quietly invalidate detection logic that was built against a previous state of the environment. Without continuous validation that rules are actually firing against current telemetry, coverage claims based on rule inventory will gradually overstate actual coverage over time. Detection engineering, to be reliable, must be treated more like software development: version-controlled, continuously tested, and validated against the live environment on a regular cadence.
Building a Coverage Measurement Practice
The path toward honest coverage measurement starts with accepting that you cannot measure coverage from inside your detection tooling alone. Your SIEM can tell you how many rules you have and which ones fired recently. It cannot tell you which adversary techniques those rules actually address, whether they would fire against a real-world execution of those techniques, or what percentage of the attack surface they collectively represent. That analysis requires deliberately mapping your detection logic to ATT&CK techniques and then validating that the mapping reflects current reality rather than the state of the environment when the rules were originally written.
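Once rules carry ATT&CK metadata, tactic-level blind spots fall out of a simple grouping. The mapping below is a hypothetical example (real mappings would come from rule metadata or a tagging convention like Sigma's), and the tactic list is abbreviated for brevity:

```python
from collections import defaultdict

# Hypothetical rule -> (tactic, technique) mapping, normally stored as rule metadata.
rule_mappings = {
    "brute_force": ("Credential Access", "T1110"),
    "lolbin_exec": ("Execution", "T1218"),
    "new_service": ("Persistence", "T1543"),
}

# Abbreviated tactic list; the full enterprise matrix has 14.
tactics = ["Initial Access", "Execution", "Persistence", "Credential Access",
           "Lateral Movement", "Collection", "Exfiltration", "Impact"]

by_tactic = defaultdict(set)
for rule, (tactic, technique) in rule_mappings.items():
    by_tactic[tactic].add(technique)

blind = [t for t in tactics if not by_tactic[t]]
print("Tactics with no mapped detections:", blind)
```

Even this toy example surfaces the pattern from the first layer above: an endpoint-heavy rule set that looks respectable by count while entire tactics, lateral movement and exfiltration among them, have nothing mapped at all.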
Beyond that starting point, the more systematic practice involves regularly running adversary simulations against your own detection stack. Tools like Atomic Red Team allow detection engineers to execute individual ATT&CK technique simulations in a controlled environment and observe whether the expected alerts fire. The results feed directly back into the coverage gap analysis and provide the ground truth that rule-count-based coverage estimates cannot. Organizations that treat detection validation as a continuous discipline rather than a periodic audit maintain a significantly more honest picture of what they can and cannot see. Adding more rules without that validation layer simply adds volume to an inventory that may not reflect actual defensive capability at all.
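The shape of that validation loop is straightforward: execute a technique, wait, check whether the expected alert fired, record the result. The sketch below stubs out both the execution step and the SIEM query (`execute_simulation` and `alert_fired` are hypothetical placeholders, not real APIs; Atomic Red Team tests are typically driven from their own tooling rather than Python):

```python
# Stub: techniques the SIEM "caught" in this run. In a real loop this would
# come from querying the alert store after each simulated execution.
SIMULATED_ALERTS = {"T1059.001", "T1078"}

def execute_simulation(technique_id):
    """Stub: would trigger the technique in a controlled test environment."""
    pass

def alert_fired(technique_id):
    """Stub: would query the SIEM for an alert tied to this technique."""
    return technique_id in SIMULATED_ALERTS

planned = ["T1059.001", "T1021.001", "T1078", "T1567.002"]
results = {}
for tid in planned:
    execute_simulation(tid)
    results[tid] = alert_fired(tid)

gaps = sorted(t for t, fired in results.items() if not fired)
print(f"Validated {sum(results.values())}/{len(planned)}; gaps: {gaps}")
```

The output of each run is exactly the artifact the coverage map needs: a list of techniques that were executed and went undetected, which no amount of rule inventory counting can produce.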
Detection coverage is the foundational metric that every other security operations metric sits on top of. MTTD is only meaningful in the context of what you are actually capable of detecting. Response times only matter for the incidents that generate alerts. Compliance scores for your detection program reflect rule existence rather than rule effectiveness. Until an organization has an honest, ATT&CK-mapped, continuously validated picture of its actual detection coverage, it is making decisions about security investment and risk with a map that does not accurately represent the territory.
Post 4 in this series takes on vulnerability metrics: why the standard patch compliance model measures activity rather than risk, and what a risk-based approach to vulnerability measurement actually looks like in practice.
