Most companies and government agencies have IT security departments, as well as operations divisions that keep things running day to day.
The problem I have seen crop up repeatedly is this: while IT security staff put out plenty of bulletins urging administrators to patch servers or fix the settings that let bad actors into the network to cause damage or even steal data, far fewer operations staff are willing to patch and reboot to apply those fixes. Even management has a role in this complacency.
Part one of the story management hears from the operations staff goes something like, “If I reboot that server and it doesn’t work right, I’ll have to restore from backup, and that could take all day,” or “I can’t be sure all the application functions will work after that upgrade. Can’t we wait a while to do it?”
Part two is that operations staff sell this notion of "unrecoverable error states resulting from applying patches" to the managers who own the servers. Those managers, not being server specialists, come to believe that applying the patch will "break the server".
After a while, you have such a backlog of patches that applying them all would take too much downtime, so nothing gets done to fix server security issues at all.
Server managers have little to be afraid of when a patch for their servers, operating system or applications is released
Security patching has grown up
Years ago, it was true that roughly half the patches released by certain corporations dominant in the industry required a subsequent patch to fix what the previous one broke. In the meantime, you were either down or vulnerable.
But patching has matured a great deal since then. Most patches no longer break applications, and they rarely leave us with a server that will not boot. The “it’ll break the server to apply it” argument does not usually hold any more.
So, the point of this article is that our server managers have little to be afraid of when a patch for their servers, operating system or installed applications is released.
The owners/managers of these servers should have some notion of what is running and what patches are available for their systems. Organisations large enough to have a CISO and an operations manager should hold a meeting, or a working lunch, and schedule the patching before one of those “click here to become a zombie computer” or “click this and the key logger will tell me everything you type” events happens on your watch.
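Keeping that "notion of what is running and what patches are available" can be as simple as tracking when each server was last patched and flagging the laggards before the backlog grows. The following is a minimal sketch only; the server names, dates, and the 30-day threshold are hypothetical, and in practice the inventory would come from your configuration-management or vulnerability-scanning tooling rather than a hard-coded list.

```python
from datetime import date

# Hypothetical inventory: server name -> date its last patch set was applied.
# Real data would be pulled from patch-management or scanning tools.
inventory = {
    "web-01": date(2010, 3, 1),
    "db-01": date(2009, 11, 15),
    "mail-01": date(2010, 5, 20),
}

def overdue_servers(inventory, today, max_days=30):
    """Return the servers whose last patch is older than max_days."""
    return sorted(
        name
        for name, last_patched in inventory.items()
        if (today - last_patched).days > max_days
    )

if __name__ == "__main__":
    # Servers overdue as of 1 June 2010, given the sample data above.
    print(overdue_servers(inventory, date(2010, 6, 1)))
```

A list like this gives the CISO and the operations manager a concrete agenda for that scheduling meeting, instead of an argument about whether patching is safe.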
It is easier than ever for attackers to make your internal computers part of a worldwide botnet doing illegal things that may expose your organisation to legal liability.
You might want to think about canning the guy who says you shouldn't patch
It is your network, and your data, but the technical reasons for failing to patch are fading into the past. Just don’t let anyone convince you that a patch from a major software supplier will do some irreparable damage so that you leave yourself (and your organisation) out there, on the internet, in an unpatched or vulnerable state. Someone will find it and exploit it, sooner rather than later.
You have a managerial responsibility to yourself to ensure the survival of your business or organisation. And you might want to think about canning the guy who says you shouldn’t patch.
Charles Abernathy is an information assurance subject matter expert, Joint Staff, Pentagon