Are vulnerability scan reports dangerous?

August 3, 2021  |  Kason Smith

What is a vulnerability scan report?

Vulnerability scan reports are requested by a wide variety of people and entities for many different reasons. Historically, a report meant a static snapshot of the scan data. Some company stakeholders may want an executive overview of the current vulnerabilities present in their environment, while others may want additional data points, such as trending, to show how much progress they have made in remediating the vulnerabilities detected by previous scans.

IT administrators and security and compliance personnel in an organization may also use reports to manage and monitor their remediation efforts. Vulnerability reports can be sorted manually or classified automatically using a ranking system to determine which vulnerabilities are most critical. Examples of rating systems include “High, Medium, and Low” or any variation on a numeric scale.
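As a rough illustration, automatic classification can be as simple as mapping each finding’s numeric score to a rating band and sorting the most severe to the top. A minimal sketch in Python follows; the findings and field names are invented for illustration, not taken from any particular scanner’s output format.

```python
# Minimal sketch: classifying raw findings with a simple ranking system.
# The findings list and its field names are invented for illustration.
findings = [
    {"host": "10.0.0.12", "issue": "Outdated OpenSSL", "score": 9.1},
    {"host": "10.0.0.7", "issue": "Weak TLS cipher", "score": 5.3},
    {"host": "10.0.0.3", "issue": "Verbose service banner", "score": 2.0},
]

def rating(score: float) -> str:
    """Map a numeric score onto a High/Medium/Low rating band."""
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"

# Classify every finding, then sort the most severe to the top.
for finding in findings:
    finding["rating"] = rating(finding["score"])
findings.sort(key=lambda f: f["score"], reverse=True)

for finding in findings:
    print(f'{finding["rating"]:<6} {finding["host"]:<10} {finding["issue"]}')
```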

Are vulnerability scan reports static or dynamic?

The vulnerability scan report can be generated in multiple forms, including static and near real-time interactive dynamic reports. There are several risks associated with leveraging a static report to measure and track vulnerabilities. Because of the dynamic nature of an organization’s network and the consistent emergence of new threats, static reports can often be outdated as soon as they are created.

Why could static data captured in a report be so dangerous? The first and most obvious reason would be the security risk of the report being intercepted by a malicious actor. When you export and transmit sensitive data, you increase the risk of exposure. Reports can be high-level overviews formatted as an executive summary, or they can contain complete details, including every finding identified on every asset discovered.

Vulnerability reports will often include references to resources on the Internet with details on how to exploit the vulnerability found, often with links to the actual exploit code. Intercepted unencrypted emails that contain reports, or report data at rest in insecure storage, can put this information in the hands of unauthorized persons.
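Because an exported report is only as safe as the channel and storage that hold it, encrypting it before it leaves the scanner is a reasonable baseline. A minimal sketch using the third-party Python cryptography package; the file names are placeholders:

```python
# Minimal sketch: encrypting an exported report before storing or
# sending it, using the third-party "cryptography" package
# (pip install cryptography). File names are placeholders.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store this key in a secrets manager, not beside the file
fernet = Fernet(key)

with open("vuln_report.pdf", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("vuln_report.pdf.enc", "wb") as f:
    f.write(ciphertext)

# Later, an authorized reader holding the key can recover the report:
plaintext = fernet.decrypt(ciphertext)
```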

Another argument against static reports is that the testing process used to evaluate the host may have changed since the report was generated. Typically, when scanning for vulnerabilities, a scanning engine runs a battery of tests against a host or application.

Sometimes host agent applications are installed to collect information about the operating system, configurations, or applications installed on the host. When running these tests, the scanning tools draw on a vulnerability database of current tests and results to discover flaws or weaknesses in the code of an operating system or application.

Scans can also look for misconfigurations or configuration drift on a particular host. The vulnerability and acceptable-configuration databases used for fingerprinting and testing each host can change daily. Once those supporting databases are updated, previously executed tests may no longer reflect all currently known vulnerabilities or misconfigurations.
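To make that staleness concrete, one simple guard is to compare a scan’s completion time with the last update to the signature database it used. A sketch, assuming your scanning platform exposes both timestamps in its metadata:

```python
# Minimal sketch: flag scan results that predate the most recent
# update to the vulnerability signature database. Both timestamps
# are assumed to be available from the scanning platform's metadata.
from datetime import datetime, timezone

scan_completed = datetime(2021, 8, 1, 6, 0, tzinfo=timezone.utc)
signatures_updated = datetime(2021, 8, 3, 2, 0, tzinfo=timezone.utc)

if signatures_updated > scan_completed:
    print("Stale: the signature database changed after this scan ran;")
    print("results may miss newly published checks. Re-scan before reporting.")
```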

One argument for static reports is that if the assets scanned are unchanged, all current scan data should still be accurate, right? Not always: new vulnerabilities are discovered daily. In January 2021, Redscan produced a report analyzing NIST security trends in 2020 and found 18,103 vulnerabilities disclosed in 2020 alone, at an average rate of roughly 50 a day. 57% of these vulnerabilities were classified as critical or high severity. Each time a new vulnerability is discovered, the host needs to be evaluated again, or a new report may need to be generated using the new evaluation criteria.

How are vulnerability scan reports useful in prioritizing remediation?

Reports often rank findings to prioritize remediation efforts. The difficulty of remediation varies with how a company is structured and what resources it has, but many identified vulnerabilities carry a CVSS score that can be used to prioritize remediation work. The remediation process can differ dramatically between organizations. Many organizations rank a discovered vulnerability using the severity assigned by the National Vulnerability Database through its associated CVSS score.
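For reference, the qualitative ratings NVD publishes alongside CVSS v3.x base scores follow fixed bands defined in the CVSS v3.x specification, so translating a score into a severity label is straightforward:

```python
def cvss_v3_severity(base_score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating,
    using the fixed bands from the CVSS v3.x specification."""
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(9.8))  # "Critical", e.g. a remote code execution flaw
```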

Many of the scanning tools used to test for vulnerabilities apply their own ranking system or severity grade. These tool-specific rankings are influenced by public vulnerability data and typically weighted by internal data unique to that scanning platform. Reports are often generated and sorted by any of these severity rankings, and a top-down list of vulnerabilities can be handed to individuals or teams to start remediation efforts.

Organizations with a more mature vulnerability management program will weigh additional information when ranking vulnerabilities, using a risk-based approach to determine the true threat an issue poses within their specific organization. One factor in a risk-based approach could be whether an asset is mission-critical to daily operations, such as the web server behind an e-commerce site. Another could be where the asset is deployed: is it on the perimeter of the network, internal, or perhaps in a high-security enclave? Each location introduces different variables to consider and weight as part of the risk evaluation.
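A risk-based approach might fold those factors into a single score by scaling the CVSS base score with multipliers for asset criticality and network exposure. The weights below are purely illustrative assumptions, not an established formula:

```python
# Minimal sketch of risk-based prioritization: scale the CVSS base
# score by asset criticality and network exposure. All weight values
# here are illustrative assumptions, not an established formula.
CRITICALITY = {"mission_critical": 1.5, "standard": 1.0, "lab": 0.5}
EXPOSURE = {"perimeter": 1.4, "internal": 1.0, "high_security_enclave": 0.7}

def risk_score(cvss: float, criticality: str, exposure: str) -> float:
    return cvss * CRITICALITY[criticality] * EXPOSURE[exposure]

# The same CVSS 7.5 finding ranks very differently on a mission-critical
# e-commerce web server at the perimeter versus an isolated lab host.
print(risk_score(7.5, "mission_critical", "perimeter"))  # 15.75
print(risk_score(7.5, "lab", "high_security_enclave"))   # 2.625
```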

Why is up-to-date data so critical?

As we consider how to review vulnerability data dynamically, we first need to look at how to keep those data feeds as current as possible. Vulnerability data can be captured in many ways. For external-facing hosts or applications scanned by public IP address or domain, most scanning tools use cloud-based scanners and require few or no deployment steps from the organization being scanned.

Running scans from a physical or virtual machine on an internal network is another way to capture data about a host. Yet another method is to deploy a passive scanner that captures network traffic. Passive scanning can be used to look for rogue hosts on the network and assist with the very challenging task of keeping an accurate inventory of the assets that have access to it.
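As a rough illustration of passive discovery, a sniffer can watch ARP traffic and report source addresses missing from a known-asset inventory without sending a single probe. A sketch using the third-party scapy package; the inventory is invented, and sniffing usually requires root privileges:

```python
# Minimal sketch of passive discovery: watch ARP traffic and report
# hosts not present in a known-asset inventory, without sending any
# probes. Uses the third-party scapy package (pip install scapy).
from scapy.all import ARP, sniff

known_assets = {"10.0.0.1", "10.0.0.12"}  # illustrative inventory
reported = set()

def check(pkt):
    if ARP in pkt:
        ip = pkt[ARP].psrc
        if ip not in known_assets and ip not in reported:
            reported.add(ip)
            print(f"Possible rogue host: {ip} ({pkt[ARP].hwsrc})")

# Listen passively for ARP traffic for one minute.
sniff(filter="arp", prn=check, store=0, timeout=60)
```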

Installing a host-based agent to collect scan data in near real-time is the preferred way to capture it. These agents are very lightweight and keep the asset inventory and application version history up to date without the guesswork of remote fingerprinting.
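At its simplest, the collection step such an agent might run just gathers OS details plus installed packages and versions directly from the host. A hedged sketch for a Debian/Ubuntu host; other platforms would need their own package query:

```python
# Minimal sketch of the collection step a lightweight host agent might
# run: gather OS details plus installed packages and versions. The
# dpkg-query call assumes a Debian/Ubuntu host.
import platform
import subprocess

inventory = {
    "hostname": platform.node(),
    "os": f"{platform.system()} {platform.release()}",
}

out = subprocess.run(
    ["dpkg-query", "-W", "-f", "${Package} ${Version}\n"],
    capture_output=True, text=True, check=True,
).stdout

inventory["packages"] = dict(line.split(" ", 1) for line in out.splitlines())
print(f'{inventory["hostname"]}: {len(inventory["packages"])} packages recorded')
```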

When we consider how many variables can change and how quickly new results can be factored in, it becomes clear that a snapshot or static report can be outdated as soon as it is captured. So, what can we do to increase our visibility and ensure we are looking at the most accurate and relevant data? We need to move to dynamic data reporting wherever possible.

This can be done for all stakeholders involved in a mature vulnerability management program, including security, compliance, and operations. Dynamic reporting is typically interactive and allows the viewer to manipulate the data on the fly with a few preconfigured clicks of the mouse.

What’s new in vulnerability scanning reports?

Today’s scanning solutions provide a far more accurate and robust way to collect, sort, and prioritize scan data in near real-time than tools from even five years ago. Understanding vulnerability data in today’s world requires a paradigm shift in how we digest it. Dynamic dashboards can be created for different use cases and tailored to individual users depending on their specific roles and responsibilities.

For example, an executive can access a read-only summary report updated in near real-time, providing up-to-the-minute trending and risk assessment metrics. Security stakeholders can have dynamic reporting focused on security metrics, the compliance team gets compliance-centric reporting, and IT teams get reporting focused on operations.
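One way to think about those role-specific views is that each role is simply a different projection over the same live findings data. A minimal sketch, with invented fields and roles:

```python
# Minimal sketch: role-tailored views as different projections over
# the same live findings data. Fields and roles are illustrative.
from collections import Counter

findings = [
    {"host": "web01", "severity": "Critical", "compliance": ["PCI DSS"]},
    {"host": "db02", "severity": "High", "compliance": []},
    {"host": "app03", "severity": "Medium", "compliance": ["PCI DSS"]},
]

def executive_view(data):
    """Read-only rollup: counts by severity, suitable for trending."""
    return dict(Counter(f["severity"] for f in data))

def compliance_view(data):
    """Only findings tied to a compliance requirement."""
    return [f for f in data if f["compliance"]]

def operations_view(data):
    """Per-host work list, most severe first."""
    order = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}
    return sorted(data, key=lambda f: order[f["severity"]])

print(executive_view(findings))  # {'Critical': 1, 'High': 1, 'Medium': 1}
```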

So, are vulnerability scanning reports dangerous? 

No; they are a vital part of how we move through the ever-changing process of a mature vulnerability management program. What is essential is how we create, use, and share the data found. Static reports still serve a purpose in tracking and sharing vulnerability statistics, and some compliance standards, such as PCI DSS, require them.

As assets change, and as the frequency with which they are evaluated and ranked changes, we can create interactive, dynamic dashboards that provide up-to-date reporting. Using dynamic tools that continuously evaluate vulnerabilities against the most recent vulnerability signatures and compliance checks available will significantly improve the accuracy of the findings and keep remediation efforts aligned with the organization’s preferred security posture.

Treat the sensitive data a vulnerability management program uncovers with the security protocols it deserves, but always have the most recent data at your fingertips to guide your next steps.
