[<prev] [next>] [thread-next>] [day] [month] [year] [list]
Message-ID: <Pine.GSO.4.64.1101121024020.2904@well.com>
Date: Wed, 12 Jan 2011 11:36:34 -0800 (PST)
From: Vic Vandal <vvandal@...l.com>
To: full-disclosure@...ts.grok.org.uk
Subject: Re: Getting Off the Patch
While this idea may work in small shops, it won't scale to large ones.
There are something like 800 heterogeneous servers where I work. Small
clusters of like-purpose servers are allocated to hosting many different
processing components that make up the enterprise architecture. Applying
purpose-specific hardening is a goal, but one that is extremely difficult
to achieve and then maintain. And at the end of the day if you have a
server cluster hosting MS-SQL or Oracle or Apache or IIS or whatever, AND
only the necessary listening services are on, AND there is filtering to
allow specific source and destination traffic, IF there's an identified
vulnerability in any of those available services the machines must be
patched to mitigate system and data risk.
Even with services/daemons/etc. that aren't used and have been disabled,
you can't rely on them remaining that way. Some newly installed component
could require starting them up, or some Sys-Admin could make a
configuration mistake and start up some vulnerable service(s). So if
there is software installed on a system and that software has a known
vulnerability and an available patch, any smart resource owner is going to
mandate that the patch be applied to mitigate "potential" risk. If they
don't and the system and/or data is compromised, that resource owner might
have a hard time explaining how due diligence was exercised to absolve
themselves and the organization of any data breach or service delivery
liability.
As for having to spend a lot of cycles testing patches, those days of half
of the patches being applied breaking something are long gone. The risk
still exists, and maybe one or two out of every hundred operating system
or core software patches does break something. Vendors have gotten a LOT
better about releasing reliable patches. I say this as an InfoSec
engineer who has been playing this patching game for 20 years. But what
about that small percentage of patches that does break something? For
mission-critical servers any organization worth its salt has a Dev, QA,
and Production server environment. You roll out the patches to Dev, and
make sure nothing breaks while the developers are working daily in that
environment. Then you roll to QA and have someone test any app that could
potentially be impacted by the patch(es) deployed. By the time you roll
the patches to Production, the risk of an outage is almost nil. And for
the workstation environment, create a pilot group for patch deployments.
Deploy patches to their machines, see if anything breaks, and if nothing
does you then deploy the patches safely to the entire organization.
As for the cost of deploying patches and the time it takes, automated
patching tools are quite mature and robust these days. It takes a
security administrator, server administrator, or desktop administrator
mere minutes and a few mouse clicks to deploy patches to hundreds or
thousands of machines.
The other side of this patching coin is being audited. Many organizations
are mandated to have independent security audits of their infrastructure
performed. Those organizations and others may also have business partners
who want audit verification of how vulnerabilities are being mitigated.
And where an independent audit report shows that an organization isn't
applying patches for countless vulnerabilities on scores of systems, you
can bet that the concept and practice of patching will be embraced very
soon thereafter.
Just for clarity I'm not saying the proposed idea has no value. I'm a big
fan of system hardening via various means. If you're not running a
vulnerable service or it's not available to untrusted machines or users,
the chances of it being compromised are obviously diminished greatly. But
you shouldn't rely on that situation remaining static, and the smart move
is to patch vulnerable software or remove it from the system altogether if
it isn't needed. Obviously removal isn't an option when it comes to
operating systems. You could replace them with some B1 certified security
level system, but you're not going to be able to run a lot of common
business apps successfully on such an architecture. And even if you could
those apps could have vulnerabilities and need to be patched. Sandboxing
has value, but it doesn't supplant patching in my professional opinion.
I do know a way to do away with patching - have software developers stop
writing crappy code that doesn't do good input validation (cough). Of
course that is a nirvana not likely to be seen in our lifetimes.
Wow, did I just write an article damn near equal in length to the InfoSec
Island one posted that started this thread? Either I have free time to
spare or I'm really into the concept of patching known vulnerabilities.
Unfortunately for me it's the latter.
Peace,
Vic
_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/
Powered by blists - more mailing lists