Sometimes you can’t win. Patching, and the right time and process for doing so, is very much a case in point.
Patching used to need more planning and manual intervention, but as internet access has improved, many manufacturers now provide built-in Updater Services. Microsoft have taken this further, resorting to patch-guerrilla tactics: Ambush Updates. They know what’s best for you, and if you won’t restart your PC then they will. Inevitably this happens when it’s least convenient for you; such is Murphy’s Law.
It leaves many simply shrugging their shoulders and letting nature take its course. Better to let systems self-update, then clear up the mess if and when problems arise. It’s a simple risk/benefit assessment, and much like attitudes to security breaches: if you’ve been lucky enough to avoid the expense and hassle so far, you probably assume it will never happen.
For the software producer, the chief concern is making sure products are secure. Convenience for the user, and any consideration of the impact on other software, are secondary. That’s not to say a manufacturer won’t test their updates before releasing them, but guaranteeing success for everyone across an endless variety of unique IT environments is impossible.
At one end of the spectrum, anti-virus systems must update on demand to maintain protection. Similarly, browsers and email clients – overwhelmingly the “front door” for malware attacks – also need regular, time-critical updates. Then there are adjacent technologies, such as Java and Adobe products, equally common mediums for attacks and always in need of patches. The most recent Verizon Data Breach Investigations Report records Java as the most common first-stage malware vector.
Even at this level there should be some consideration of software interdependencies, but moving up the scale of complexity, towards operating systems and databases, patching becomes much riskier. How much can you rely on siloed manufacturers to guarantee full compatibility for your “mission-critical” applications?
Case Study: One banking client of ours has concluded that, for them, safety-first patching means “Don’t patch.” They run an important application on RHEL 5, even though the platform was retired last year. Chances are the application could work, or be made to work, on the more secure and better-performing RHEL 7, but nobody wants to roll the dice.
And with good reason. Just recently, according to Computerworld, Windows 10 patches introduced problems with RDP operation (CredSSP) and disastrously affected various SSD drives, while Windows 7 patches mistakenly removed support for certain network interface cards.
So patching still carries risk; it’s just that, for most, the potential operational problems are outweighed by the security jeopardy. Everyone knows about WannaCry and its rapid worldwide proliferation, spreading via the EternalBlue exploit for a Windows SMB vulnerability. It’s a stark example of why patches should be applied without delay: updates to remediate the vulnerability (MS17-010) had been available for weeks, but for many, the opportunity was missed.
But there are other good reasons to delay patching, also in the interests of security. A pre-rollout test will save hassle in the long run and is standard practice for many. By deploying updates to isolated test systems first, or to your most tolerant, IT-savvy users (a.k.a. Lab Rats), you can head off problems before rolling out patches to all devices. For the sake of a brief hiatus, you strike a good balance between functional and security risks.
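To make the idea concrete, here’s a minimal Python sketch of a ring-based rollout: patches go to the Lab Rats first, soak for a couple of days, and only proceed to the wider estate if no problems surface. The deploy_patch() and health_check() helpers, the host names and the patch identifier are all hypothetical placeholders for whatever your endpoint-management tooling actually provides.

```python
# Sketch of a two-ring patch rollout: pilot group first, then everyone else.
import time

RINGS = {
    "lab_rats": ["test-vm-01", "test-vm-02", "it-power-user-laptop"],
    "everyone": ["finance-ws-01", "finance-ws-02", "warehouse-kiosk-07"],  # ...rest of the estate
}
SOAK_HOURS = 48       # how long to watch the pilot ring before going wide
MAX_FAILURES = 0      # any failure in the pilot ring halts the rollout

def deploy_patch(host: str, patch_id: str) -> None:
    print(f"deploying {patch_id} to {host}")   # placeholder for the real deployment call

def health_check(host: str) -> bool:
    return True                                 # placeholder: query monitoring/helpdesk for issues

def rollout(patch_id: str) -> None:
    # Ring 1: isolated test systems and tolerant users
    for host in RINGS["lab_rats"]:
        deploy_patch(host, patch_id)

    # The "brief hiatus" - in practice a scheduled follow-up job, not a blocking sleep
    time.sleep(SOAK_HOURS * 3600)

    failures = [h for h in RINGS["lab_rats"] if not health_check(h)]
    if len(failures) > MAX_FAILURES:
        print(f"halting rollout of {patch_id}; problems reported on {failures}")
        return

    # Ring 2: the rest of the estate
    for host in RINGS["everyone"]:
        deploy_patch(host, patch_id)

rollout("KB0000000")   # hypothetical patch identifier
```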
And what are the Security Best Practice recommendations for patching? Security control frameworks, such as the CIS Controls, are based on decades of thinking by the best brains in cybersecurity, and we should take them into account. Even though a Change Management process isn’t as much fun as installing a new security gadget, it can be just as valuable for keeping you safe. By embracing the concept of Change Control, you specify when changes are going to be made and, just as valuably, when changes shouldn’t be seen. The upside? Unplanned changes – including breach activity – are highlighted and isolated from intentional, approved changes.
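In practice, that amounts to reconciling what you observe against what was approved. The sketch below illustrates the idea in Python with made-up data structures (paths, windows and timestamps are illustrative, not any product’s schema): changes that fall inside an approved change window are treated as expected; anything else is flagged for investigation.

```python
# Sketch of change control triage: detected changes vs. approved change windows.
from datetime import datetime

APPROVED_CHANGES = [
    # (affected path, window start, window end)
    ("/etc/httpd/conf/httpd.conf", datetime(2018, 6, 12, 22, 0), datetime(2018, 6, 13, 2, 0)),
]

def is_planned(path: str, seen_at: datetime) -> bool:
    """A change is planned if it matches an approved path inside its window."""
    return any(path == p and start <= seen_at <= end
               for p, start, end in APPROVED_CHANGES)

def triage(detected_changes):
    """Return only the unplanned changes - the ones worth investigating."""
    return [(p, t) for p, t in detected_changes if not is_planned(p, t)]

# Example: a config edit inside its approved window, plus a new binary nobody scheduled.
changes = [
    ("/etc/httpd/conf/httpd.conf", datetime(2018, 6, 12, 23, 15)),
    ("/usr/local/bin/dropper",     datetime(2018, 6, 13, 3, 40)),
]
for path, when in triage(changes):
    print(f"UNPLANNED change to {path} at {when} - investigate")
```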
Contemporary system/file integrity monitoring technology can be automated to intelligently identify patterns of changes and classify them as “known safe.” When integrated with your ITSM platform, this means change control needn’t be a dreary bureaucratic burden of change approvals and forward planning. Taking things further, you can also leverage “second opinion” sources of threat intelligence, such as file whitelists, to automatically analyze and approve change activity. It means you can operate with the flexibility to make changes when needed, and still benefit from change control. A win-win, at last.
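As a rough illustration of the “second opinion” step, the following Python sketch hashes changed files and checks them against a whitelist of known-safe hashes: matches are auto-approved, everything else is escalated. The whitelist contents and the escalation action are assumptions for illustration, not a specific feed or product API.

```python
# Sketch of whitelist reconciliation for file integrity monitoring.
import hashlib
from pathlib import Path

KNOWN_SAFE_SHA256 = {
    # Hypothetical entries, e.g. hashes published alongside a vendor patch bundle.
    # (This one is simply the SHA-256 of an empty file, included so the demo has a match.)
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855": "empty file",
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 rather than reading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def reconcile(changed_files):
    for path in map(Path, changed_files):
        if not path.exists():
            print(f"skipping {path}: not present on this system")
            continue
        file_hash = sha256_of(path)
        if file_hash in KNOWN_SAFE_SHA256:
            print(f"auto-approved {path} ({KNOWN_SAFE_SHA256[file_hash]})")
        else:
            print(f"escalating {path}: hash {file_hash[:12]}... not in whitelist")

# Example invocation against a path that exists on most Linux systems.
reconcile(["/etc/hosts"])
```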