Years ago, it was very painful to release patches to correct the errors, but with the advent of modems and then the Internet, patch distribution became very cost effective.
It could be argued that the ease with which patches can be distributed has fostered an environment that rewards rapid feature delivery over mature development practices that inherently promote stability and security. With today's constant stream of patches coming from so many sources, however, and with the tandem demands of security and stability applying pressure, organizations can no longer afford to patch and pray.
Why Are There So Many Patches?
Despite all of the advances over the years, software development is still immature. There are dozens of well-thought-out methodologies to assist in defining requirements, module interactions, code re-use, testing, and so on. But let's think about this for a moment: which parts of a computer have code in them? A quick list includes the CPU, BIOS, storage system, graphics card, network card, other hardware with on-board firmware, the operating system, device drivers, and security applications (including anti-virus and personal firewalls), not to mention all of the in-house and third-party user applications. On a typical desktop PC, for example, the list of software and firmware written by various groups can quickly number in the hundreds, and yet it must all co-exist, with many of the applications working together to varying degrees.
The point is this: different teams using different personal styles, methodologies, tools, and assumptions generate all of this code, often with little to no interaction. When the various pieces of software are combined (i.e., all compiled or interpreted code, whether embedded in firmware or run in an OS environment), the results aren't always readily predictable because of the tremendous number of independent variables. As a result, issues arise, and when development groups attempt to fix them, they produce software patches with the best of intentions.
If we return to our basic principle, that the number of errors in code rises as software becomes increasingly complex, it follows that the number of potential errors in the patches themselves will rise correspondingly. Furthermore, patches often contain third-party code or ancillary libraries that are not directly designed, coded, compiled, and tested by the development team in question. Simply put, patches introduce many variables of their own.
Change Control
To be explicit, for the purpose of this article patches are defined as a focused subset of code released in a targeted manner, as opposed to an entire application released through a major or minor version code drop. A patch may fix a bug, improve security, or even update the application from one version to another in order to address issues and provide new features. These days, of course, security patches get the lion's share of media attention, but fixing security flaws isn't the only reason patches are released.
Regardless of a patch's intent, the problem is that introducing it into an existing system brings unknown variables that can adversely affect the very systems the patch was, in good faith, supposed to help. Organizations that apply patches in an ad hoc manner (i.e., with little or no planning prior to deployment) are said to "patch and pray." The slang reflects the fact that once patches are applied, IT can only hope for the best.
Interestingly, in reaction to the often-unknown impact of patching, there appears to be one school of thought holding that all patches should be applied and another arguing that patches should never be applied. It is unrealistic to treat patching as an all-or-nothing proposition. What groups need to focus on is the managed introduction of patches into production systems based on sound risk analysis.
It’s All About Risk Management
In a perfect world, everyone would have exactly the same hardware and software, and any new patch would install perfectly, without issues. That uniformity is nearly impossible to attain on a macro or global scale, but it does serve as an interesting thought experiment. The fact is that organizations will almost always have environments that differ from those of their vendors, peers, competitors, and so on. Thus, any patch applied to existing systems carries a degree of risk.
Likewise, there are risks associated with not patching. What organizations need to do is assess the level of risk of each patch, define mitigation strategies to manage the identified risks, and formally decide whether or not that risk is acceptable. To put this in the proper context, let's define a basic process for patching: risk management is a pervasive concern throughout the whole process, but risk management by itself does not define a process.
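To make the idea concrete, here is a minimal sketch in Python of how the risk of a single patch might be recorded and weighed against the risk of not patching. The fields, scoring scale, and decision rule are purely illustrative assumptions, not part of any formal methodology.

# Illustrative sketch only: recording and weighing the risk of one patch.
# The 1-5 scales, the mitigation credit, and the decision rule are all
# hypothetical assumptions for the sake of the example.
from dataclasses import dataclass

@dataclass
class PatchRisk:
    patch_id: str
    severity_if_unpatched: int   # 1 (low) .. 5 (critical) exposure if we do NOT patch
    stability_risk: int          # 1 (low) .. 5 (high) chance the patch breaks something
    mitigations: list            # e.g. ["tested in lab", "rollback image available"]

    def risk_of_patching(self) -> int:
        # Each documented mitigation strategy reduces the stability risk, floor of 1.
        return max(1, self.stability_risk - len(self.mitigations))

    def decision(self) -> str:
        # Patch when the risk of staying unpatched outweighs the risk of patching.
        if self.severity_if_unpatched >= self.risk_of_patching():
            return "apply (risk formally accepted, mitigations in place)"
        return "defer (document the decision and revisit)"

if __name__ == "__main__":
    patch = PatchRisk(
        patch_id="EXAMPLE-001",
        severity_if_unpatched=4,
        stability_risk=3,
        mitigations=["tested in lab", "rollback image available"],
    )
    print(patch.patch_id, "->", patch.decision())

The point of the sketch is simply that both sides of the risk, and the mitigations that shift it, are written down so the accept-or-defer decision is explicit rather than ad hoc.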
A Basic Software Patch Process
The patching process does not need to be complicated, but it must be effective for the organization and its adoption must be formalized. Furthermore, it is absolutely critical that people be made aware that the process is mandatory. The intent is to codify a process that manages risk while allowing systems to evolve. By creating a standard process that everyone follows, best practices can also be developed over time and the process refined. With all of this in mind, here is a simple high-level process that organizations can use as a starting point in discussions over their own patch management process:
1. Awareness
There must be active mechanisms that alert administrators that new patches exist. These methods range from monitoring vendor e-mails and talking to support groups all the way to using automated tools, such as the Microsoft Baseline Security Analyzer, that actively scan systems for missing patches. Newly identified patches must be added to a list of candidate patches for each system, as sketched below.
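As a purely illustrative sketch of what the output of this step looks like (the advisory feed and inventory below are hypothetical stand-ins, not the output of any particular tool), a few lines of Python can merge gathered advisories into a per-system list of candidate patches:

# Illustrative sketch only: building a per-system list of candidate patches.
# The inventory and advisories dictionaries are hypothetical stand-ins for
# whatever vendor mailing lists, support contacts, or scanning tools report.

# What each system currently has installed (hypothetical inventory).
inventory = {
    "web01": {"OS-2023-10", "APP-1.4.2"},
    "db01": {"OS-2023-09", "DB-5.1.0"},
}

# Advisories gathered from awareness sources: patch id -> systems it applies to.
advisories = {
    "OS-2023-11": {"web01", "db01"},
    "DB-5.1.1": {"db01"},
}

def candidate_patches(inventory, advisories):
    """Return {system: sorted list of applicable patches not yet installed}."""
    candidates = {}
    for system, installed in inventory.items():
        missing = [patch for patch, targets in advisories.items()
                   if system in targets and patch not in installed]
        if missing:
            candidates[system] = sorted(missing)
    return candidates

if __name__ == "__main__":
    for system, patches in candidate_patches(inventory, advisories).items():
        print(f"{system}: candidate patches -> {', '.join(patches)}")

Whatever form the tracking takes, the end product of the awareness step is the same: a maintained, per-system list of candidate patches that feeds the rest of the process.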