Message-ID: <200310211726.18781.ken@vanwyk.org>
From: ken at vanwyk.org (Kenneth R. van Wyk)
Subject: No Subject (re: openssh exploit code?)
On Tuesday 21 October 2003 17:07, Robert Ahnemann wrote:
> I flip to the local radar and get some sort of proof that there might be
> a thunderstorm coming. Talk is cheap (as was said), so it's up to the
> admin to verify if A) there is a real threat B) the threat applies to
> your systems C) the threat damage is worth the damage of 'unscheduled
> downtime'
FWIW, I agree that these are all reasonable steps to take in order to help
prioritize whether (exiting the analogy...) you should apply the patch to
YOUR systems. There are a couple of other complicating factors that I haven't
seen mentioned in this thread, though -- apologies if I've overlooked them:
1) I've seen patches break applications. When applying a patch to a
production app server, it's a good career-stabilizing move to test the patch
first to ensure that, if NOTHING else, the app still works once the patch is
in place (a rough sketch of what I mean follows after this list).
2) Change management in some tightly controlled production data centers can be
extreme. This is particularly true for environments in which change
management has regulatory oversight -- such as in the pharmaceutical
industry, where servers have to be FDA certified (in the USA, at least).
That is, in some cases, even if you KNOW that the storm is coming and it is
highly likely to hit you, you cannot take the corrective action that you
think is called for. In cases like this, it may be prudent to look for other
workarounds to protect those production systems (one such workaround is
sketched below)...
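To illustrate point 1, here's a minimal, hypothetical sketch of the kind of
pre-production test I mean. The host name "staging-app1" and the
smoke_test.sh script are placeholders, not anything from a real environment,
and it assumes a Debian-ish box reachable over ssh:

    #!/bin/sh
    # Apply the patch to a staging box first and make sure the application
    # itself still answers before the change goes anywhere near production.
    # "staging-app1" and smoke_test.sh are placeholders.
    set -e
    ssh root@staging-app1 'apt-get update && apt-get install -y openssh-server'
    if ./smoke_test.sh staging-app1; then
        echo "staging OK -- safe to schedule the production change"
    else
        echo "app broke after patching -- do NOT roll this to production" >&2
        exit 1
    fi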
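And to illustrate point 2: when change control rules out patching on your own
schedule, a compensating control is sometimes the only move left. Since this
thread started with OpenSSH, one hypothetical example -- assuming a Linux
host with iptables and a made-up management subnet -- is to narrow who can
reach sshd at all until the patch can be scheduled:

    #!/bin/sh
    # Temporary workaround, NOT a substitute for the patch: allow only the
    # (hypothetical) management subnet 10.0.42.0/24 to reach sshd and drop
    # everything else aimed at port 22.
    iptables -A INPUT -p tcp --dport 22 -s 10.0.42.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP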
There are a lot of variables and a lot of complexity in the patch-and-chase
process. If it were only so simple to run {windows update|apt-get
upgrade|up2date|...} on all of our systems, we would have figured it out by
now. IMHO.
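(For what it's worth, the "simple" version is easy enough to write -- here's a
hypothetical sketch over a made-up hosts.txt, again assuming Debian-ish boxes
reachable over ssh. It's everything around it -- testing, change control,
scheduling -- that isn't simple:)

    #!/bin/sh
    # The naive approach: walk a host list and upgrade everything in place.
    # No staging, no change window, no rollback -- which is exactly the
    # problem.  hosts.txt is a placeholder.
    for host in $(cat hosts.txt); do
        ssh root@"$host" 'apt-get update && apt-get -y upgrade' \
            || echo "upgrade failed on $host" >&2
    done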
Cheers,
Ken van Wyk
http://www.krvw.com