7 Guidelines for Vulnerability Management Scanning

April 1, 2015  |  Ed Bellis

Lessons learned in vulnerability scanning to make your life easier

As anyone reading this likely already knows, security can be a thankless job. I know; I’ve been dedicated to it for over 20 years. When I ran Security at Orbitz, it was absolutely critical that my team and I stayed on top of threats, attacks and potential exploits. And we had to ensure that our execution was flawless, every day, even though the influx of new data and threats was never-ending. Managing this became a critical part of our program, and vulnerability management scanning was a key part of the job.

While in the trenches, we developed a series of guidelines for working with vulnerability scanners and penetration testing tools. I found that following these dramatically improved our company’s security posture and efficacy, and helped our teams focus on the most important work. You have to be relentless about prioritizing in order to get the most out of your teams and tools.

Here’s what we learned:

1. CVSS is great. But it’s only part of the picture.

The Common Vulnerability Scoring System (CVSS) is table stakes these days when examining vulnerability scan results, but be careful not to lean too heavily on it when prioritizing your remediation tasks. CVSS includes temporal metrics intended to account for changing threats; however, temporal scores can only lower the base score, never raise it. So if you look at CVSS and focus only on the 8’s, 9’s and 10’s, you may be missing critical issues buried in the middle of the scale.

Let me give you a hot-button, commonly referenced example: the Heartbleed vulnerability exposed the majority of web servers running over SSL on the Internet and allowed for the leaking of data (including the very encryption keys that protected them). But how did CVSS rate Heartbleed? It scored only a five. Why did CVSS misread Heartbleed so badly? The scoring system doesn’t allow for a high score on a vulnerability whose impact is “information leakage,” even though in this case the information being leaked could have been, and was, highly sensitive. In fact, private keys could have been stolen and used to decrypt SSL traffic that likely contained all sorts of important information for your company and customers. After all, what is SSL used for but to encrypt the traffic you deem private or sensitive?

You have to take into account an ever-shifting threat landscape and threat model, asset priorities, and mitigating controls in order to take a holistic approach to prioritized remediation. The importance of assets, the processes they are responsible for, and the controls around them all need to be considered. A CVSS 10 on a machine that is not connected to anything and serves as the archive for your random lunch picker probably shouldn’t be in the same realm of priority as a CVSS 5 vulnerability sitting on the OLTP database that houses the keys to the kingdom.

Additionally "fixing" a vulnerability might be suboptimal. Sometimes mitigating the vulnerability or simply turning a vulnerable service off is the right answer. Pragmatism is an important skill in vulnerability management guidelines.
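To make that concrete, here’s a minimal sketch in Python of the kind of composite scoring this implies. The weights, field names and multipliers are illustrative assumptions, not a standard formula; the point is that the CVSS base score is just one input alongside asset criticality, exposure and mitigating controls.

```python
# Illustrative only: the weights and fields below are assumptions,
# not a standard formula. Tune them to your own environment.

def risk_score(cvss_base, asset_criticality, internet_facing, mitigated):
    """Combine CVSS with business context into a remediation priority.

    cvss_base         -- 0.0-10.0 score from the scanner
    asset_criticality -- 1 (lab box) to 5 (keys-to-the-kingdom OLTP db)
    internet_facing   -- True if reachable from untrusted networks
    mitigated         -- True if a compensating control (WAF, ACL) applies
    """
    score = cvss_base * asset_criticality          # weight by what's at stake
    if internet_facing:
        score *= 1.5                               # broader attack surface
    if mitigated:
        score *= 0.5                               # credit compensating controls
    return score

# A CVSS 5 on the crown-jewel database outranks a CVSS 10 on the lunch picker:
print(risk_score(5.0, 5, True, False))    # 37.5
print(risk_score(10.0, 1, False, False))  # 10.0
```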

2. Authenticated scans are your friend.

One of the most common complaints about vulnerability scan results is false positives. While not foolproof, running authenticated scans goes a long way toward eliminating false positives, and in many cases it has the added benefit of providing a CPE fingerprint.

If you’re not familiar with the acronym, CPE stands for Common Platform Enumeration. A CPE fingerprint is a machine-readable representation of what is running on a particular asset, down to the exact version. This gives you the ability to track assets, and it can serve as a poor man’s asset management system. One quick security benefit of knowing what your assets are running: when new vulnerabilities come out, you know which assets they affect without running a scan. That’s right, you heard me correctly. Sometimes the best practice you can implement in your vulnerability management program is not having to scan at all. If I know the configuration of my asset down to the platform, version and language, why have my vulnerability scanner perform the guesswork? Skip straight to patching and remediation! You can find more about CPE here: https://cpe.mitre.org/
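As a rough sketch of what this looks like in practice, the Python snippet below checks an asset inventory of CPE names against the CPE list from a new advisory, so you know who is affected before any scan runs. The inventory and advisory data are invented for illustration; in real life the inventory would come from your authenticated scans and the advisory CPEs from a feed such as the NVD.

```python
# Hypothetical data for illustration: a real inventory would come from your
# authenticated scans, and advisory CPEs from a vulnerability feed.

inventory = {
    "web-01": "cpe:/a:openssl:openssl:1.0.1f",
    "web-02": "cpe:/a:openssl:openssl:1.0.2",
    "db-01":  "cpe:/a:oracle:mysql:5.6.10",
}

# CPE names listed as affected in a (made-up) advisory.
advisory_cpes = {
    "cpe:/a:openssl:openssl:1.0.1f",
}

affected = {host for host, cpe in inventory.items() if cpe in advisory_cpes}
print(affected)  # {'web-01'} -- patch these first, no new scan required
```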

Of course, authenticated scans apply to your web applications as well. Authenticated web application scans can detect vulnerabilities in the protected areas of your application, which is likely where the valuables are stored and processed. I’m still surprised to see how many organizations scan their publicly facing web applications but never set up authenticated scans. The parts of your application that sit behind authentication are where some of its most important functions reside, which aligns well with the asset prioritization I mentioned above; scanning them is critical to your application security program.
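As a bare-bones illustration of what “behind authentication” means to a scanner, here’s a Python sketch using the requests library. The URLs, form fields and credentials are placeholders; the point is simply that an authenticated session gets real content from protected pages instead of login redirects, so those pages can actually be tested.

```python
# Minimal sketch: an unauthenticated scanner never sees /account/*.
# URLs, form fields and credentials below are placeholders.
import requests

session = requests.Session()

# Log in once; the session keeps the auth cookie for subsequent requests.
session.post(
    "https://app.example.com/login",
    data={"username": "scan-svc", "password": "use-a-dedicated-test-account"},
)

# Protected pages now respond with real content instead of redirects.
for path in ("/account/profile", "/account/payments", "/admin/reports"):
    resp = session.get("https://app.example.com" + path)
    print(path, resp.status_code)
```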

One note here: authenticated scans will produce a greater volume of results. That means you must be mindful and test these scans on non-production systems prior to rolling them out. It also means not relying solely on CVSS, per the guidance in point 1, since you’ll be dealing with a much larger pile of findings.

3. Remember the OSI Model has 7 layers.

As mentioned above, you can’t forget about your applications. I see an amazing number of organizations that are far along in their network vulnerability scanning programs, yet they aren’t doing anything with their applications. Scanning your applications is a key component of a good vulnerability management program. Two areas that are continuously problematic each year in the Verizon DBIR are misconfiguration and application security.

While scanning your applications is necessary, it only represents the basic blocking and tackling of a mature application security program. As your program matures, it is likely to grow to include threat modeling, static analysis, penetration testing, code reviews and so on. You cannot ignore layer 7; it has become one of the most common targets for attacks and breaches.

4. Don’t dump-and-run, make it consumable.

You know what I’m talking about: the infamous dump-and-run. “Here’s your 300-page PDF with a laundry list of every vulnerability known to man!”

From what I’ve seen, systems administrators, developers, network engineers and other remediators all handle a dump-and-run exactly the same way: by filing it in the trash. The oversized PDF dump is the least effective way to get critical issues fixed in your environment.

You need to make scan results consumable and actionable for those responsible for remediation. SysAdmins don’t want a laundry list of vulnerabilities sorted by CVE identifier; they need an actionable list of what needs to get done, such as deploying a specific patch or update to a specific group of assets, with their relevant identifiers.
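One way to get there, sketched below in Python with invented data, is to pivot the raw findings from one row per CVE per host into one task per fix, which is the shape a remediator can actually act on.

```python
from collections import defaultdict

# Hypothetical scanner output: one row per CVE per asset.
findings = [
    {"asset": "web-01", "cve": "CVE-2014-0160", "fix": "Upgrade OpenSSL to 1.0.1g"},
    {"asset": "web-02", "cve": "CVE-2014-0160", "fix": "Upgrade OpenSSL to 1.0.1g"},
    {"asset": "db-01",  "cve": "CVE-2012-2122", "fix": "Apply MySQL 5.6 patchset"},
]

# Pivot: one actionable task per fix, with the assets it applies to.
tasks = defaultdict(set)
for f in findings:
    tasks[f["fix"]].add(f["asset"])

for fix, assets in tasks.items():
    print(f"{fix}: {', '.join(sorted(assets))}")
# Upgrade OpenSSL to 1.0.1g: web-01, web-02
# Apply MySQL 5.6 patchset: db-01
```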

As Gene Kim so eloquently stated, “The rate at which information security and compliance introduce work into IT organizations totally outstrips IT organizations’ ability to complete, whether it’s patching vulnerabilities or implementing controls to fulfill compliance objectives. The status quo almost seems to assume that IT operations exist only to deploy patches and implement controls, instead of completing the projects that the business actually needs.”

Or to put it another way...don’t be that guy.

5. You can actually prioritize, rather than just analyze.

Typically, security teams spend tons of time putting together Excel spreadsheets and swimming through countless rows of data. Doing so will get the job done, eventually... kind of. But as soon as you manage to rise to the top of your current data ocean, another wave hits you. You can’t prioritize what to fix if you can’t even keep up with the inbound volume of data about potential threats, breaches and attacks.

What you need is a way to immediately prioritize the data in front of you. This is a case where tools, rather than elbow grease, may be of help. In other words: work smarter, not harder.
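Even a few lines of scripting beat re-sorting the spreadsheet by hand every week. The Python sketch below assumes a hypothetical CSV export with asset, cve, cvss and asset_criticality columns; your scanner’s export format will differ.

```python
import csv

# Assumed export columns: asset, cve, cvss, asset_criticality (1-5).
with open("scan_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Rank by a composite of severity and what the asset is worth,
# then surface only the short list worth a remediation ticket.
rows.sort(key=lambda r: float(r["cvss"]) * int(r["asset_criticality"]),
          reverse=True)

for r in rows[:20]:
    print(r["asset"], r["cve"], r["cvss"])
```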

Good vulnerability management and scanning also means steering clear of a few known gotchas.

6. Avoid the DHCP gotcha.

DHCP is commonly used in organizations to manage IP addresses on various assets. It’s very popular in desktop environments and allows for flexible management of networks. However, DHCP introduces complications when managing vulnerability scan results.

By default, vulnerability scanners often report results by IP address. But what if, the next time you scan your network, all your assets have changed IP addresses? It turns into a mess, and tracking remediation becomes impossible.

It’s best to segment scans against DHCP assets and track them by a different identifier, such as hostname or NetBIOS name. You’ll need a flexible system for managing the scan results to ensure you can track remediation accurately without adding complications. If you combine this with the CPE data from tip #2, you’ll be ahead of 80% of the companies I’ve seen.
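Here’s a toy Python illustration, with invented data, of why the identifier matters: keyed by IP address, remediation tracking breaks the moment DHCP reshuffles leases; re-keyed by hostname, the same finding lines up across scans.

```python
# Two scans of the same DHCP pool; leases changed in between.
scan_monday = {
    ("10.0.0.5", "laptop-alice"): ["CVE-2014-0160"],
    ("10.0.0.6", "laptop-bob"):   [],
}
scan_friday = {
    ("10.0.0.6", "laptop-alice"): ["CVE-2014-0160"],  # same box, new IP
    ("10.0.0.5", "laptop-bob"):   [],
}

def by_hostname(scan):
    # Re-key on the stable identifier instead of the IP address.
    return {host: vulns for (_ip, host), vulns in scan.items()}

mon, fri = by_hostname(scan_monday), by_hostname(scan_friday)
still_open = {h for h, v in fri.items() if v and mon.get(h) == v}
print(still_open)  # {'laptop-alice'} -- tracked correctly despite the IP swap
```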

7. Avoid the load balancer gotcha.

Load balancers are wonderful, except when you’re scanning for vulnerabilities on the other side of them. Similar to the asset-identity issues with DHCP, you’ll need to make sure you have access to the load balancer configuration so you can track which real IP addresses and port numbers are responding with the reported vulnerability. Without this critical information, you’ll be left with a virtual IP address as an asset identifier (and a whole lot of guesswork).
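A minimal sketch of that mapping step, in Python with an invented VIP-to-pool table that would in practice come from your load balancer configuration:

```python
# Pool membership exported from the load balancer config (illustrative).
vip_pools = {
    ("203.0.113.10", 443): ["10.0.1.21:8443", "10.0.1.22:8443"],
}

# A scanner finding reported against the virtual IP.
finding = {"ip": "203.0.113.10", "port": 443, "cve": "CVE-2014-0160"}

backends = vip_pools.get((finding["ip"], finding["port"]), [])
if backends:
    print(f"{finding['cve']} on VIP maps to: {', '.join(backends)}")
else:
    print("No pool mapping -- remediation target is guesswork")
```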

So those are a few tips to give you a jumpstart on managing your security issues. I hope they help you build a stellar security and vulnerability management program at your own organization!
