GRC Viewpoint

Why Your Vulnerability Management Program Fails

How do you measure the success of your vulnerability management program? Research by Ponemon found that 62% of organizations that were breached were unaware that they were vulnerable. This means that, regardless of the KPIs you use, the chances are your program is leaving you exposed.

However, an analysis conducted last year discovered that patching just 12% of all possible updates would yield the same likelihood of preventing a breach by an APT as patching 100%. According to Gartner, “Enterprises fail at reducing their exposure to threats through self-assessment of risks because of unrealistic, siloed and tool-centric approaches.” Organizations must rethink their approach to vulnerability management to ensure maximum security with a minimum of resources. 

Calculating the ROI of vulnerability management

The true value of such a program is the sum of all the breaches that didn’t occur. But how do you go about calculating such a figure? It is not feasible to estimate the number of theoretical breaches you avoided and multiply it by the average cost of a breach.

In order to assess the impact, organizations should track the delta in strategic metrics, such as revenue lost to exploited vulnerabilities, the number of lapses in compliance, and the number of hours that customers were impacted by a vulnerability. These could be supplemented by metrics that determine the operational effectiveness of the program. Analysts suggest that an organization could measure its ability to remediate vulnerabilities with the following metrics (a brief calculation sketch follows the list):

  • Mean time to resolve/mitigate critical vulnerabilities on critical assets
  • Mean time to detect critical vulnerabilities
  • Average window of exposure
  • Number of security incidents caused by exploited vulnerabilities
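
To make these operational metrics concrete, here is a minimal sketch of how they could be calculated from exported vulnerability records. The record fields, sample dates, and metric definitions below are illustrative assumptions rather than a reference to any particular scanner or ticketing system.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class VulnRecord:
    # Illustrative fields; a real export from a scanner or ticketing
    # system will use its own schema.
    published: date            # when the vulnerability became publicly known
    detected: date             # when it was first detected on a critical asset
    resolved: date             # when it was remediated or mitigated
    caused_incident: bool = False  # did it lead to a security incident?

def operational_metrics(records: list[VulnRecord]) -> dict:
    """Compute the operational KPIs listed above for critical vulnerabilities."""
    return {
        "mean_time_to_detect_days": mean((r.detected - r.published).days for r in records),
        "mean_time_to_resolve_days": mean((r.resolved - r.detected).days for r in records),
        "avg_window_of_exposure_days": mean((r.resolved - r.published).days for r in records),
        "incidents_from_exploited_vulns": sum(r.caused_incident for r in records),
    }

if __name__ == "__main__":
    sample = [
        VulnRecord(date(2024, 1, 1), date(2024, 1, 4), date(2024, 1, 20)),
        VulnRecord(date(2024, 2, 1), date(2024, 2, 2), date(2024, 2, 9), caused_incident=True),
    ]
    print(operational_metrics(sample))
```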

However, if organizations are tracking the above, why do three out of five organizations that have been breached say the breach occurred because a patch was available for a known vulnerability but was not applied?

Flaws in vulnerability management

Many organizations’ programs struggle because the strategic and operational metrics listed above require a solid tactical foundation. The tactical fundamentals a vulnerability management program must get right include the percentage of vulnerabilities that can be detected, how effectively those vulnerabilities are ranked by risk, and the speed at which remediating actions can be implemented.

Historically, the above criteria have relied on human-powered workflows. Toolsets do exist to aid teams, but they are often limited in scope:

  • Penetration testing often occurs on an annual basis, providing only point-in-time assessments of vulnerability. Digital transformation has produced a growing number of new external-facing assets, and processes are needed to identify any changes in real time.
  • The results of vulnerability scanners often need to be validated to remove false positives, requiring humans to recreate the exploit. At a time when 62% of organizations say they aren’t sufficiently staffed, accuracy is essential to avoid wasted effort.
  • Cyber threat intelligence solutions or CISA’s Known Exploited Vulnerabilities (KEV) catalog need to be consulted in order to determine the likelihood of exploitation (see the sketch after this list). When 77% of organizations lack the resources to keep up with the volume of patches, automation of these tasks is needed.
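
As an illustration of how that exploitation-likelihood lookup could be automated, the sketch below cross-references a backlog of CVE IDs against CISA’s public KEV JSON feed. The feed URL and field names reflect the catalog as published at the time of writing and should be verified before use; the CVE IDs in the example are placeholders.

```python
import requests

# Public CISA KEV JSON feed (verify the URL and schema before relying on it).
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def known_exploited(open_cves: set[str]) -> set[str]:
    """Return the subset of open_cves that CISA lists as actively exploited."""
    catalog = requests.get(KEV_URL, timeout=30).json()
    kev_ids = {entry["cveID"] for entry in catalog["vulnerabilities"]}
    return open_cves & kev_ids

if __name__ == "__main__":
    # Placeholder CVE IDs standing in for an organization's open findings.
    backlog = {"CVE-2021-44228", "CVE-2023-0000"}
    print(known_exploited(backlog))  # prioritize these for patching first
```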

Improving vulnerability management

Organizations must begin by focusing on where attacks are most likely to strike: 69% of organizations have experienced an attack targeting an unknown, unmanaged, or poorly managed internet-facing asset. Organizations should review their external attack surface with the goal of minimizing exposure to risk using the following capabilities:

  • Risks should be continuously monitored by identifying new assets and changes to existing ones. Detecting changes with event-based scanning keeps external posture assessments up to date at all times.
  • Risks should be identified using methods that are accurate and transparent. The context of external-facing assets needs to be understood so that the right tests are performed, preventing inappropriate tests from being run and producing false positives.
  • Risks should be scored based on the likelihood and impact of exploitation. Prioritization should automatically augment baseline scores like CVSS with real-world insights, such as whether an exploit is actively being used in the wild (a minimal scoring sketch follows this list).
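
As a minimal sketch of what such prioritization could look like, the example below adjusts a CVSS base score with a few real-world signals. The weights and signal names are illustrative assumptions, not a standard formula; a production system would calibrate them against its own data.

```python
def prioritized_score(cvss_base: float,
                      in_kev: bool,
                      exploit_public: bool,
                      internet_facing: bool) -> float:
    """Return a 0-10 priority score that boosts exploited, exposed findings."""
    score = cvss_base
    if in_kev:            # actively exploited in the wild (e.g. listed in KEV)
        score += 2.0
    elif exploit_public:  # a proof-of-concept exploit is publicly available
        score += 1.0
    if internet_facing:   # reachable from the external attack surface
        score += 1.0
    return min(score, 10.0)

# Example: a "medium" CVSS 6.5 finding that is internet-facing and actively
# exploited outranks an internal CVSS 8.0 finding with no known exploitation.
print(prioritized_score(6.5, in_kev=True, exploit_public=False, internet_facing=True))   # 9.5
print(prioritized_score(8.0, in_kev=False, exploit_public=False, internet_facing=False)) # 8.0
```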

In summary, organizations should review whether their current vulnerability management programs can produce the desired results. In many cases, new external risks appear in such high volumes that existing workflows are overwhelmed. However, with the right capabilities, organizations can minimize their exposure to risk and the likelihood of a breach.


By Alex Wells, Head of Product Marketing at Hadrian
