In 2025, 48,185 CVEs were published. That number represents an unprecedented volume of disclosed vulnerabilities, and it landed on security programs that were already struggling to keep pace with what came before it. Microsoft’s October 2025 Patch Tuesday alone addressed 172 vulnerabilities, including six zero-days. [1] The scale of the vulnerability landscape has outpaced the traditional patch management model so thoroughly that the metrics designed to measure that model now measure an increasingly fictional version of the problem. Patch compliance rate, the most widely reported vulnerability metric in the industry, was designed for a world where you could patch everything on a reasonable schedule. That world no longer exists. The metric has not caught up to that reality.
I want to be fair to patch compliance rate before criticizing it. It served a purpose when the vulnerability landscape was smaller, the cadence of disclosed CVEs was lower, and a dedicated patching team could realistically work through the queue with something approaching completeness. In that context, measuring how many systems were patched within a defined SLA window was a reasonable proxy for whether the organization was staying ahead of known vulnerabilities. The problem is that the context changed and the metric did not. Today, a 95% patch compliance rate tells you that 95% of discovered vulnerabilities were patched within your SLA window. It tells you almost nothing about whether the unpatched 5% includes the vulnerability an attacker is actively exploiting right now.
The CVSS Problem and Why Severity Alone Misleads
The patch compliance model is built on CVSS, the Common Vulnerability Scoring System, and CVSS has been the backbone of vulnerability prioritization for nearly two decades. It provides a numerical score from 0 to 10 representing the technical severity of a vulnerability based on factors like impact, exploitability, and attack vector. The logic is intuitive: higher score means more dangerous, so patch higher-scoring vulnerabilities first.
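To make that logic concrete, here is a minimal sketch of CVSS-first prioritization, the model the rest of this post argues against; the CVE identifiers and scores are hypothetical.

```python
# Naive CVSS-first prioritization: sort findings by base score, descending.
# CVE identifiers and scores are hypothetical illustrations.
findings = [
    {"cve": "CVE-2025-0001", "cvss": 9.8},
    {"cve": "CVE-2025-0002", "cvss": 5.4},
    {"cve": "CVE-2025-0003", "cvss": 7.5},
]

patch_queue = sorted(findings, key=lambda f: f["cvss"], reverse=True)
for finding in patch_queue:
    print(finding["cve"], finding["cvss"])
```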
The limitation of that logic has been visible in the data for years and is impossible to ignore at current vulnerability volumes. Research from FIRST, the organization that maintains the Exploit Prediction Scoring System (EPSS), found that among CVEs scored at CVSS 7 or higher, the threshold most organizations use to define the high and critical vulnerabilities requiring urgent attention, only 2.3% were actually observed in exploitation attempts over a given month. [2] Organizations following a CVSS-first prioritization model are, by the numbers, spending the vast majority of their remediation effort on vulnerabilities that nobody is actively trying to exploit.
One more number deserves particular attention: in Q1 2025, 28% of exploited vulnerabilities carried only medium CVSS base scores. [3] An organization operating on a CVSS-driven prioritization model that reserves urgent remediation effort for high and critical vulnerabilities is systematically deprioritizing more than a quarter of the vulnerabilities attackers are actually using. The score measures theoretical severity. Attackers are not constrained by theoretical severity. They go where the opportunity is.
Edgescan’s 2025 Vulnerability Statistics Report makes a related point with equal clarity: no single risk scoring system is sufficient. EPSS, CISA KEV, CVSS, and SSVC offer valuable but sometimes contradictory guidance, and vulnerabilities can have a high CVSS score, a low EPSS score, and an SSVC decision of “Act” simultaneously, requiring context that none of the individual systems provides on its own. [4]
The Modern Scoring Toolkit and What Each Piece Actually Measures
If CVSS alone is insufficient, the answer is not to abandon it but to use it as one signal among several. Three additional data sources have become essential for risk-based vulnerability prioritization: EPSS, which estimates the probability that a given CVE will be exploited in the wild in the near term; CISA’s Known Exploited Vulnerabilities (KEV) catalog, which lists vulnerabilities with confirmed active exploitation; and SSVC, the Stakeholder-Specific Vulnerability Categorization framework, which turns severity and exploitation signals into explicit remediation decisions. Understanding what each one measures, and what each one cannot tell you, is the foundation of a more honest vulnerability metrics program.
The practical value of combining these three sources is substantial. Research published in 2025 analyzing 28,000+ CVEs found that combining KEV and EPSS alongside CVSS could reduce the urgent prioritization workload by approximately 95%, from roughly 16,000 vulnerabilities that meet the CVSS 7+ threshold down to approximately 850 that have actual evidence of exploitation or high exploitation probability, while maintaining comprehensive coverage of the techniques attackers are actually using. [5] That is a transformative operational improvement, not a marginal one.
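Here is a minimal sketch of what that combined filter can look like, assuming each finding carries a CVSS base score, an EPSS probability, and a KEV membership flag; the 0.1 EPSS cutoff and the sample records are illustrative assumptions, not values from the cited research.

```python
def urgent_by_cvss(findings, threshold=7.0):
    """Traditional model: everything scored CVSS 7+ is urgent."""
    return [f for f in findings if f["cvss"] >= threshold]

def urgent_by_evidence(findings, epss_threshold=0.1):
    """Risk-based model: urgent only with confirmed exploitation (KEV)
    or meaningful exploitation probability (EPSS above a chosen cutoff)."""
    return [f for f in findings if f["in_kev"] or f["epss"] >= epss_threshold]

findings = [
    {"cve": "CVE-2025-1111", "cvss": 9.1, "epss": 0.02, "in_kev": False},
    {"cve": "CVE-2025-2222", "cvss": 6.5, "epss": 0.64, "in_kev": True},
    {"cve": "CVE-2025-3333", "cvss": 7.8, "epss": 0.01, "in_kev": False},
]

print(len(urgent_by_cvss(findings)))      # 2: both CVSS 7+ findings
print(len(urgent_by_evidence(findings)))  # 1: only the finding with exploitation evidence
```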
What Risk-Based Vulnerability Metrics Actually Look Like
Moving from CVSS-first to risk-based vulnerability measurement requires building a prioritization model that accounts for three dimensions the traditional patch compliance model ignores: exploitation likelihood, asset criticality, and environmental context. The first two can be addressed through the scoring toolkit described above. The third requires something that no external scoring system can provide: knowledge of your own environment.
A medium CVSS vulnerability on an internet-facing single sign-on gateway is categorically more dangerous than a critical CVSS vulnerability on an air-gapped research system. The CVSS scores tell you exactly the opposite. Environmental context is the dimension that external scoring systems cannot provide and that risk-based vulnerability metrics must therefore capture internally: whether a vulnerable asset is internet-facing, whether it handles sensitive data, whether it sits in a path that would give an attacker meaningful lateral movement capability, and whether compensating controls are in place that reduce exploitability in your specific configuration.
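One way to fold those internal signals into a composite score is sketched below; the likelihood blending, the context multipliers, and the field names are illustrative assumptions, not part of any of the scoring systems discussed above.

```python
def composite_risk(cvss, epss, in_kev, asset):
    """Blend technical severity, exploitation likelihood, and local
    environmental context into one prioritization score."""
    # Exploitation likelihood dominates: KEV membership is treated as
    # near-certain exploitation; otherwise fall back to the EPSS probability.
    likelihood = 1.0 if in_kev else epss
    score = cvss * (0.3 + 0.7 * likelihood)

    # Environmental context that no external scoring system can see.
    if asset["internet_facing"]:
        score *= 1.5
    if asset["sensitive_data"] or asset["lateral_movement_path"]:
        score *= 1.25
    if asset["compensating_controls"]:
        score *= 0.5
    return score

# The example above: a medium-CVSS flaw on an internet-facing SSO gateway
# outranks a critical-CVSS flaw on an air-gapped research system.
sso_gateway = {"internet_facing": True, "sensitive_data": True,
               "lateral_movement_path": True, "compensating_controls": False}
air_gapped = {"internet_facing": False, "sensitive_data": False,
              "lateral_movement_path": False, "compensating_controls": True}

print(composite_risk(6.5, 0.70, True, sso_gateway))  # ~12.2
print(composite_risk(9.8, 0.01, False, air_gapped))  # ~1.5
```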
One practical framework for building this into your vulnerability program is tiered SLA enforcement based on composite risk rather than CVSS score alone. Boards are increasingly monitoring metrics like the percentage of Tier 0 vulnerabilities closed within SLA, and that kind of tiered framing forces the program to articulate what makes something genuinely urgent versus just technically severe. [3]
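A tier assignment rule in that spirit might look like the following sketch; the tier boundaries and SLA windows are illustrative assumptions rather than values from the cited report.

```python
# Illustrative tier boundaries and SLA windows; these cutoffs are
# assumptions for the sketch, not a published standard.
SLA_DAYS = {0: 7, 1: 30, 2: 90, 3: 180}

def assign_tier(in_kev, epss, internet_facing, critical_asset):
    """Map composite risk signals to a remediation tier."""
    if in_kev and internet_facing:
        return 0  # confirmed exploitation on an internet-reachable asset
    if in_kev or (epss >= 0.1 and critical_asset):
        return 1  # exploitation evidence, or likely exploitation of a high-value target
    if epss >= 0.1 or critical_asset:
        return 2
    return 3

tier = assign_tier(in_kev=True, epss=0.4, internet_facing=True, critical_asset=True)
print(f"Tier {tier}, SLA {SLA_DAYS[tier]} days")  # Tier 0, SLA 7 days
```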
The metrics that flow from a tiered model like this are fundamentally more honest than patch compliance rate. Instead of measuring whether vulnerabilities were patched within a fixed window regardless of their actual risk, you measure what percentage of Tier 0 and Tier 1 vulnerabilities were remediated within their respective SLAs. That number tells you whether your program is prioritizing correctly and executing on its most critical work. It also gives you something patch compliance rate can never offer: a metric that changes meaningfully when your risk posture actually changes, rather than one that looks good while your most dangerous exposures go unaddressed.
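Once each finding records its tier, open date, and close date, the metric itself is a short computation; the record layout below is hypothetical.

```python
from datetime import date

SLA_DAYS = {0: 7, 1: 30}  # illustrative windows for the two urgent tiers

def tier_sla_compliance(findings, tier):
    """Percentage of closed findings in a tier remediated within its SLA."""
    closed = [f for f in findings if f["tier"] == tier and f["closed"]]
    if not closed:
        return None
    met = sum(1 for f in closed
              if (f["closed"] - f["opened"]).days <= SLA_DAYS[tier])
    return 100.0 * met / len(closed)

findings = [
    {"tier": 0, "opened": date(2025, 3, 1), "closed": date(2025, 3, 5)},
    {"tier": 0, "opened": date(2025, 3, 1), "closed": date(2025, 3, 20)},
    {"tier": 1, "opened": date(2025, 3, 1), "closed": None},  # still open
]

print(tier_sla_compliance(findings, 0))  # 50.0: one of two closed within 7 days
```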
The Metrics That Compliance Leaves on the Table
Beyond the prioritization problem, there are several vulnerability-related measurements that most programs track poorly or not at all, and that carry more signal about actual risk than compliance rate does.
Exposure window by criticality tier is one of the most useful. Rather than measuring whether a vulnerability was patched within thirty days, measure the total days of exposure for each vulnerability weighted by its risk tier. A critical vulnerability left open for fifteen days represents a different exposure profile than a medium vulnerability open for the same period, and the metric should reflect that difference rather than treating both as equivalent entries in a compliance calculation.
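A sketch of that weighting follows, with per-tier weights that are purely illustrative assumptions.

```python
# Tier-weighted exposure: the per-tier weights are illustrative assumptions.
TIER_WEIGHT = {0: 10.0, 1: 5.0, 2: 2.0, 3: 1.0}

def weighted_exposure_days(findings):
    """Total days of exposure, scaled by each finding's risk tier."""
    return sum(f["days_open"] * TIER_WEIGHT[f["tier"]] for f in findings)

# Fifteen days on a Tier 0 finding counts ten times more than the same
# fifteen days on a Tier 3 finding, instead of being treated as equivalent.
print(weighted_exposure_days([{"tier": 0, "days_open": 15}]))  # 150.0
print(weighted_exposure_days([{"tier": 3, "days_open": 15}]))  # 15.0
```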
Asset coverage is another. Most vulnerability programs have gaps in their scanning coverage: systems that were never enrolled, cloud assets that fall outside the scope of traditional scanners, shadow IT that nobody inventoried. Tracking the percentage of known assets with current vulnerability scan data, and the percentage of total estimated attack surface that falls outside scan coverage, gives the program visibility into where its own blind spots are. [6]
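The arithmetic is simple once the inventory exists, and the inventory is the hard part; the asset structure below is a hypothetical sketch.

```python
def scan_coverage_pct(assets):
    """Percentage of known assets with current vulnerability scan data."""
    if not assets:
        return 0.0
    scanned = sum(1 for a in assets if a["has_current_scan"])
    return 100.0 * scanned / len(assets)

assets = [
    {"name": "web-01", "has_current_scan": True},
    {"name": "db-01", "has_current_scan": True},
    {"name": "legacy-07", "has_current_scan": False},  # never enrolled
]

print(round(scan_coverage_pct(assets), 1))  # 66.7
```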
Remediation verification rate rounds out the picture. Patching a vulnerability and verifying that the patch was applied correctly and the vulnerability no longer exists in a current scan are two different things. Programs that measure remediation without verification are counting tickets closed, not risk reduced. Edgescan’s research found that 45.4% of discovered vulnerabilities in large enterprises remained unresolved over a twelve-month window, predominantly in the network and device layer. [4] That persistence suggests that remediation metrics without verification are systematically overstating how much work is actually getting done.
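A minimal sketch of the distinction, assuming each ticket records whether a follow-up scan confirmed the fix; the field names are hypothetical.

```python
def verification_rate(tickets):
    """Of tickets marked remediated, the share confirmed absent in a
    follow-up scan. Counting closures alone counts tickets, not risk."""
    closed = [t for t in tickets if t["closed"]]
    if not closed:
        return None
    verified = sum(1 for t in closed if t["verified_by_rescan"])
    return 100.0 * verified / len(closed)

tickets = [
    {"closed": True, "verified_by_rescan": True},
    {"closed": True, "verified_by_rescan": False},  # closed, never rescanned
    {"closed": False, "verified_by_rescan": False},
]

print(verification_rate(tickets))  # 50.0
```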
The Honest Question the Metrics Should Answer
The core question that a mature vulnerability metrics program should be able to answer is not “how many vulnerabilities did we patch this month.” It is: “Are the vulnerabilities most likely to result in a successful attack against our most critical assets being remediated faster than attackers can exploit them?” That is a harder question to instrument, and it requires combining external threat intelligence with internal asset criticality data in ways that most programs have not built out. But it is the only question that actually measures whether the vulnerability management program is doing its job.
Patch compliance rate is not going away, and it should not. There are regulatory contexts and reporting requirements where it remains a useful and necessary metric. But it should be reported alongside, not instead of, risk-stratified remediation data, exploitation likelihood signals, asset criticality weighting, and honest acknowledgment of scan coverage gaps. Used in that context, it is one data point in a coherent picture. Used in isolation, it is the most reassuring lie in the vulnerability management program’s reporting deck.
Post 5 in this series examines the compliance score problem directly: why compliance scores and security posture are not the same thing, and what it would look like to use compliance data as the floor it should be rather than the ceiling it too often becomes.
