Putting Patch Management in Perspective


Whether they are scanning and patching “vulnerable” systems or urgently reacting to a vendor’s patch release, many organizations have become more and more reactive in dealing with electronic security.

We are surrounded by reminders, references, admonitions, update announcements, alerts and the infamous “Patch Tuesdays.” Many IT professionals are bombarded with patching advice such as, “If you had just done a better job of patching, you wouldn’t have had that worm;” or “Threats are following closer and closer on the heels of published vulnerabilities;” and, “You had better work harder and faster next time, and you had better buy a mega-buck patch management system.”

These are the recurrent messages forever haunting operating system vendors, auditors, regulators and security experts. But is vulnerability patching really the be-all and end-all of reducing electronic risk?

No Panacea

While patch management is certainly an important part of a comprehensive security strategy, organizations need to realize that it is not the only way to seal the electronic holes in a network infrastructure. In fact, patching can be an incredibly expensive, infringing and time-consuming activity that sometimes introduces new errors and breaks applications.

The expense, time and potential adverse side effects might well be tolerable if patching were as good at reducing risk as we all assume. Usually, it is not.

In this age of vulnerability disclosures, it is important to note that no application is developed without errors. In fact, more than 20 years ago, IBM found that the typical developer makes one or more programming errors per hundred lines of code, which makes for a lot of patches.

Even the best quality-driven programming teams generate only one or two errors per thousand lines of code. Assume that only ten percent of those errors are security-related; even the best-case scenario then entails at least one security-related coding error per 10,000 lines of code.

Most current operating systems, such as Windows, Macintosh, Sun, HP and Linux, contain 20-to-40 million lines of code, and therefore roughly 2,000 to 4,000 security-related coding errors each.
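
The arithmetic is easy to check. The short Python sketch below simply reproduces that best-case estimate, treating the one-security-error-per-10,000-lines rate above as a given rather than as measured data.

    # Back-of-the-envelope estimate of latent security flaws, using the
    # best-case rate derived above (one security-related defect per
    # 10,000 lines of code). Illustrative arithmetic only.
    SECURITY_DEFECTS_PER_LINE = 1 / 10_000

    for lines in (20_000_000, 40_000_000):
        flaws = lines * SECURITY_DEFECTS_PER_LINE
        print(f"{lines:>11,} lines of code -> ~{flaws:,.0f} latent security flaws")

    # Output:
    #  20,000,000 lines of code -> ~2,000 latent security flaws
    #  40,000,000 lines of code -> ~4,000 latent security flaws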

Of course, to do anything useful we must also count the errors in our applications, which roughly doubles these numbers on a typical computer. And to get the complete picture, we need to account for the universal security axiom: security is inversely proportional to complexity.

Complexity includes the interrelationships among functions within a program, between one program and another, among various inputs and APIs, and in the communication between computers on one network and computers on other networks; and complexity is growing exponentially.

Finding and patching vulnerabilities does generally repair some known security issues. However, the vulnerabilities discovered and published for any particular system or application each year are only a fraction of the errors those systems actually contain, and the vast majority of those errors will never be discovered.

So, patched or not, our systems and applications will always have many, many vulnerabilities.

Rushing to Patch

The Common Vulnerabilities and Exposures (CVE) list has averaged 1,418 confirmed electronic vulnerabilities per year over the past five years. CERT’s counts are more than double that, recently averaging more than 3,700 new electronic vulnerabilities each year.

Yet the number of unique vulnerabilities that are ever written into worms or bots, and thereby ever become genuinely risky, is on the order of 10 per year, or significantly less than one percent.

Those that become part of a “successful worm” (one in the wild that actually infected a computer somewhere) can be counted on one hand, and those that ever make it into a successful hacking tool have never exceeded two percent.

If you consider just Microsoft vulnerabilities, fewer than 10 percent in any given year are ever part of in-the-wild malicious activity.

So why are we patching the other 90-to-98 percent?
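
A quick sanity check makes the ratios explicit. The Python sketch below uses the annual averages cited above as given; none of the counts are recomputed here.

    # Ratios behind the "significantly less than one percent" claim,
    # using the figures cited in the text.
    cve_per_year = 1418        # average confirmed vulnerabilities (CVE)
    cert_per_year = 3700       # CERT's larger recent average
    weaponized_per_year = 10   # roughly how many ever appear in worms or bots

    print(f"Ever weaponized, as a share of CVE entries:  {weaponized_per_year / cve_per_year:.2%}")
    print(f"Ever weaponized, as a share of CERT entries: {weaponized_per_year / cert_per_year:.2%}")

    # Output:
    # Ever weaponized, as a share of CVE entries:  0.71%
    # Ever weaponized, as a share of CERT entries: 0.27%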

With a reasonable logic model, a group of security experts can quickly eliminate a rush-to-patch for more than 50 percent of newly described vulnerabilities: because no plausible threat relates to them, because such an attack would require unrealistic computing horsepower, or because the hypothetical attacker could never be positioned where the attack requires.

Specialty groups that watch “hacker chatter” and track other indicators, such as an exploit author’s attack and code-sharing history, can eliminate another 50 percent of those that remain, based on whether the author tends to create and share attack code with the bad guys or the good guys.

Finally, it is important to understand how vulnerable your organization is as a whole to an attack. Of course you have vulnerable computers, but how vulnerable is the organization given all of its filters, policies, network segments, practices and so on?

By imagining theoretical attack scenarios based on a particular vulnerability, and testing those scenarios against the security technologies and practices already in place, organizations can typically eliminate another 50 percent of the remaining candidates for emergency patching.

A combination of all three methods can typically eliminate 80-to-90 percent of newly described vulnerabilities from “rush to patch” consideration.
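
If each of the three screens removes roughly half of what the previous one left, the combined effect follows directly. The Python sketch below is illustrative arithmetic only; the 50 percent figures are the estimates given above.

    # Three successive screens, each removing ~half of what remains.
    remaining = 1.0
    screens = (
        "threat-plausibility screen",
        "author / chatter tracking",
        "scenario test vs. existing controls",
    )
    for screen in screens:
        remaining *= 0.5
        print(f"after {screen:<36} {remaining:.1%} still on the rush-to-patch list")

    print(f"eliminated from rush-to-patch consideration: {1 - remaining:.1%}")

    # after threat-plausibility screen           50.0% still on the rush-to-patch list
    # after author / chatter tracking            25.0% still on the rush-to-patch list
    # after scenario test vs. existing controls  12.5% still on the rush-to-patch list
    # eliminated from rush-to-patch consideration: 87.5%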

When to Patch

Patching is a particularly good control when an attack is likely to target one or a few computers, when the site has a very small number of computers (like a home or very small business network), or when a particular class of computers is both manageable and highly exposed (like the Internet-facing computers in most corporations).

For the enterprise, it makes sense to focus patching efforts on a select computer base. “Emergency” patching often distracts IT staff from taking the steps that actually protect the organization. And, of course, patching does not work against the “zero-day” attack.

For most mass attacks and worms, you must patch very nearly 100 percent of computers to avoid significant impact. Most rush-to-patch efforts achieve 70-to-80 percent coverage, which in these mass-attack scenarios is almost the same as not patching at all.
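
To see why, consider a hypothetical 10,000-host network (an illustrative figure, not drawn from any survey). The sketch below simply counts the hosts a worm can still use at various levels of patch coverage.

    # How many exploitable hosts remain at a given patch-coverage level,
    # for a hypothetical 10,000-host fleet (illustration only).
    hosts = 10_000

    for coverage in (0.0, 0.70, 0.80, 0.99):
        exposed = int(round(hosts * (1 - coverage)))
        print(f"{coverage:>4.0%} patched -> {exposed:>6,} hosts still exploitable")

    # Output:
    #   0% patched -> 10,000 hosts still exploitable
    #  70% patched ->  3,000 hosts still exploitable
    #  80% patched ->  2,000 hosts still exploitable
    #  99% patched ->    100 hosts still exploitable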

Long before a new vulnerability with a high risk of becoming a pervasive exploit is published, you could have implemented hundreds of generic, inexpensive, proactive and low-infringement countermeasures that would substantially reduce your organization’s exposure to both old and yet-to-be-discovered vulnerabilities.

These “synergistic” controls effectively complement more fundamental controls like firewalls, identity technologies and anti-virus products. For instance, Zotob required TFTP or FTP on PCs to be installed, named correctly and in the path. Blaster required DCOM to be installed, wide open and in its default configuration. Code Red required Web servers to run ISAPI services that almost no one uses. And the list goes on.

Renaming TFTP and FTP, moving them, or removing them from the path on Windows machines are all effective, generic “essential configurations.” Egress filtering at routers is used by only about two percent of corporations, yet it stops well over 70 percent of recent backdoors, bots and worms.
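
The same default-deny thinking can be written down as policy. The Python sketch below is a minimal illustration of the egress-filtering idea; the host roles and ports are hypothetical examples, not a recommended rule set.

    # Minimal sketch of default-deny egress filtering: outbound traffic is
    # dropped unless the originating host's role has a business reason for it.
    EGRESS_ALLOW = {
        "mail-gateway": {25},        # only the mail gateway may originate SMTP
        "web-proxy":    {80, 443},   # only the proxy may browse out directly
        "dns-resolver": {53},        # only internal resolvers may query out
    }

    def egress_allowed(host_role, dst_port):
        # Default deny: a desktop that suddenly tries to speak SMTP or IRC
        # outbound (typical bot behavior) is simply dropped at the border.
        return dst_port in EGRESS_ALLOW.get(host_role, set())

    print(egress_allowed("web-proxy", 443))   # True
    print(egress_allowed("desktop", 25))      # False: a spam bot goes nowhere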

Ingress filtering at Internet and VPN routers is used by fewer than eight percent of corporations, but it significantly reduces both malcode and hacking attacks. Most companies filter six or eight file types from email; why not filter all file types except the dozen your company actually uses? That way, when someone dreams up a new one, you won’t be caught flat-footed.
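
For email, the allowlist approach might look something like the sketch below; the extension list is a hypothetical example, not a recommendation.

    # Default-deny attachment filtering: allow only the file types the
    # business actually uses and drop everything else.
    ALLOWED_ATTACHMENT_TYPES = {
        ".pdf", ".doc", ".xls", ".ppt", ".txt", ".csv",
        ".jpg", ".png", ".gif", ".zip", ".rtf", ".htm",
    }

    def attachment_permitted(filename):
        # Default deny: a newly invented or renamed file type is blocked
        # automatically, with no new filter rule required.
        name = filename.lower()
        dot = name.rfind(".")
        return dot != -1 and name[dot:] in ALLOWED_ATTACHMENT_TYPES

    print(attachment_permitted("quarterly_report.pdf"))   # True
    print(attachment_permitted("holiday_photos.scr"))     # False, blocked by default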

Using synergistic controls like these addresses not only the majority of “micro vulnerabilities” that will eventually be discovered and patched, but also the macro-vulnerability and complexity issues (like TFTP’s role in Zotob) that contribute to most successful attacks.

Once you get into this mode, you can evaluate each new vulnerability announcement against your synergistic controls, as well as your primary ones, and smugly skip the patch-o-mania this time because you’ll know you’re covered.

Peter Tippett is CTO of Cybertrust, Inc. and chief scientist for ICSA Labs, a division of Cybertrust. He specializes in the utilization of large-scale risk models and research to create pragmatic, corporate-wide security programs.