Date: Thu, 13 Jan 2011 09:48:43 +0100
From: "Cor Rosielle" <cor@...post24.com>
To: <full-disclosure@...ts.grok.org.uk>
Subject: Re: Getting Off the Patch

I am not responding based solely on the opinions in these emails; I
actually know what is in the OSSTMM.

Let me start by saying that patching is not bad in itself. It can be a
good solution; it can even be the only solution, and it can be
necessary to patch a piece of software. The OSSTMM won't tell you
otherwise. As Pete's article says, patching is just a small part of
the solution.

One of the things with patches is that people feel an urge to apply
them. Vic wrote: "... if there is software installed on a system and
that software has a known vulnerability and an available patch, any
smart resource owner is going to mandate that the patch be applied to
mitigate 'potential' risk." Even if we doubt the vulnerability affects
our system, we are often uncertain what will happen if we don't apply
the patch. Because we fear the audit, we decide to cover our ass and
apply the patch: not because it makes the system safer, but because it
keeps us out of trouble. We even install unnecessary patches, like a
fix in mod_ssl when we aren't using SSL in the first place. Fear,
uncertainty and doubt do not help in being objective about the
solution.
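
For example, before treating a mod_ssl advisory as urgent, you can
simply check whether the module is loaded at all. A rough sketch in
Python (assuming an Apache host with apachectl on the PATH; some
distributions use httpd -M instead):

    import subprocess

    def module_loaded(name):
        # "apachectl -M" lists the modules Apache actually loaded
        out = subprocess.run(["apachectl", "-M"],
                             capture_output=True, text=True).stdout
        return any(name in line for line in out.splitlines())

    if module_loaded("ssl_module"):
        print("mod_ssl is loaded: the advisory is relevant here")
    else:
        print("mod_ssl is not loaded: patching is hygiene, not urgent")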

Most people don't realize that automated download and installation
processes can be attacked; there are even tools available to do so.
Through such an attack you might be applying a malicious or poisoned
update to 800 of your servers. When you follow the OSSTMM, you see
this involves both an access and a trust, and both need to be
controlled.
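
A minimal sketch of controlling that trust, in Python: verify the
downloaded package against a checksum published over a separate
channel before installing it. The file name and checksum here are
hypothetical, and a real deployment would also verify a vendor
signature, e.g. with GPG:

    import hashlib

    # Published by the vendor out of band (hypothetical value)
    EXPECTED_SHA256 = "..."

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha256_of("patch-1.2.3.tar.gz") != EXPECTED_SHA256:
        raise SystemExit("checksum mismatch: refusing to install")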

Patching is often considered necessary for passing an audit. The bad
thing is that auditors often don't understand much about security. By
identifying controls in several organizations, they came up with a
list of standards, often called "best practices", and then they simply
check that list to see whether your company complies with the
"security standard". But those best practices are nothing more than a
list of safety controls that were used by some other companies,
operating under different circumstances than yours and at another time
in history. In the best case there is some evidence that the control
really did provide protection.

Best practices can still have some value. Most of the time, though,
they are applied by people who don't care to think about what is best
and just want to be compliant with something and pass the audit.
Perhaps they do care, but don't understand that being compliant is
different from being safe, and so they focus on compliance. When you
follow the OSSTMM you can get some understanding of how much a best
practice will increase (or decrease) your safety. It can help you
decide to accept a risk, simply because the control is more expensive
than the damage you would suffer without that best practice.
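
As a back-of-the-envelope illustration (this is the classic annualized
loss expectancy comparison, not the OSSTMM's rav calculation, and all
figures are hypothetical):

    # Apply the control only if it costs less than the loss it prevents
    incident_loss      = 50_000   # expected damage per incident
    incidents_per_year = 0.1      # one incident per ten years
    control_cost       = 8_000    # yearly cost of the "best practice"

    ale = incident_loss * incidents_per_year  # annualized loss expectancy
    if control_cost > ale:
        print("accept the risk: the control costs more than it saves")
    else:
        print("apply the control: it costs less than the expected loss")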

As Pete's article says, patching is just a small part of the solution.
Following the OSSTMM you will get a good idea of where the strong and
weak spots in your systems are, and how to determine that. The method
scales perfectly, because it can tell you the level of safety of a
single system, of a group of systems that belong together, and even of
all systems in the entire organization, whether there are 10, 100,
1000 or more.
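
A toy illustration of why such a measurement scales, in Python: count
the porosity (visibility + access + trust, in OSSTMM terms) per system
and then aggregate over any number of systems. The counts and the flat
sum are simplifications; the actual rav calculation also weighs
controls and limitations:

    systems = {
        "web-01": {"visibility": 3, "access": 5, "trust": 2},
        "db-01":  {"visibility": 1, "access": 2, "trust": 4},
        # ... 10, 100 or 1000 more entries
    }

    def porosity(s):
        return s["visibility"] + s["access"] + s["trust"]

    per_system = {name: porosity(s) for name, s in systems.items()}
    print("per system:", per_system)
    print("whole scope:", sum(per_system.values()))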

The OSSTMM is not a list of good or bad things. It is not a list of
controls you have to apply, and it does not make any decisions for
you. It just helps you identify the good and bad things in your
environment, rate them, find the strong and weak areas in your
protection, and predict how much a new safety control will actually
increase the overall safety.

Cor Rosielle
Chief Technology Officer


> -----Original Message-----
> From: full-disclosure-bounces@...ts.grok.org.uk [mailto:full-
> disclosure-bounces@...ts.grok.org.uk] On Behalf Of Vic Vandal
> Sent: Wednesday, 12 January 2011 20:37
> To: full-disclosure@...ts.grok.org.uk
> Subject: Re: [Full-disclosure] Getting Off the Patch
> 
> While this idea may work in small shops, it won't scale to large
> ones.  There are something like 800 heterogeneous servers where I
> work.  Small clusters of like-purpose servers are allocated to
> hosting many different processing components that make up the
> enterprise architecture.  Applying purpose-specific hardening is a
> goal, but one that is extremely difficult to achieve and then
> maintain.  And at the end of the day, if you have a server cluster
> hosting MS-SQL or Oracle or Apache or IIS or whatever, AND only the
> necessary listening services are on, AND there is filtering to allow
> only specific source and destination traffic, IF there's an
> identified vulnerability in any of those available services, the
> machines must still be patched to mitigate system and data risk.
> 
> Even with services/daemons/etc. that aren't used and have been
> disabled, you can't rely on them remaining that way.  Some newly
> installed component could require starting them up, or some Sys-Admin
> could make a configuration mistake and start up some vulnerable
> service(s).  So if there is software installed on a system and that
> software has a known vulnerability and an available patch, any smart
> resource owner is going to mandate that the patch be applied to
> mitigate "potential" risk.  If they don't and the system and/or data
> is compromised, that resource owner might have a hard time explaining
> how due diligence was exercised to absolve themselves and the
> organization of any data breach or service delivery liability.
> 
> As for having to spend a lot of cycles testing patches, the days when
> half of the applied patches broke something are long gone.  The risk
> still exists, and maybe one or two out of every hundred operating
> system or core software patches do break something.  Vendors have
> gotten a LOT better about releasing reliable patches.  I say this as
> an InfoSec engineer who has been playing this patching game for 20
> years.  But what about that small percentage of patches that does
> break something?  For mission-critical servers, any organization
> worth its salt has a Dev, QA, and Production server environment.  You
> roll out the patches to Dev, and make sure nothing breaks while the
> developers are working daily in that environment.  Then you roll to
> QA and have someone test any app that could potentially be impacted
> by the patch(es) deployed.  By the time you roll the patches to
> Production, the risk of an outage is almost nil.  And for the
> workstation environment, create a pilot group for patch deployments.
> Deploy patches to their machines, see if anything breaks, and if
> nothing does you then deploy the patches safely to the entire
> organization.
> 
> As for the cost of deploying patches and the time it takes, automated
> patching tools are quite mature and robust these days.  It takes a
> security administrator, server administrator, or desktop administrator
> mere minutes and a few mouse clicks to deploy patches to hundreds or
> thousands of machines.
> 
> The other side of this patching coin is being audited.  Many
> organizations are mandated to have independent security audits of
> their infrastructure performed.  Those organizations and others may
> also have business partners who want audit verification of how
> vulnerabilities are being mitigated.  And where an independent audit
> report shows that an organization isn't applying patches for
> countless vulnerabilities on scores of systems, you can bet that the
> concept and practice of patching will be embraced very soon
> thereafter.
> 
> Just for clarity, I'm not saying the proposed idea has no value.  I'm
> a big fan of system hardening via various means.  If you're not
> running a vulnerable service, or it's not available to untrusted
> machines or users, the chances of it being compromised are obviously
> greatly diminished.  But you shouldn't rely on that situation
> remaining static, and the smart move is to patch vulnerable software
> or remove it from the system altogether if it isn't needed.
> Obviously removal isn't an option when it comes to operating systems.
> You could replace them with some B1-certified security level system,
> but you're not going to be able to run a lot of common business apps
> successfully on such an architecture.  And even if you could, those
> apps could have vulnerabilities and need to be patched.  Sandboxing
> has value, but it doesn't supplant patching in my professional
> opinion.
> 
> I do know a way to do away with patching: have software developers
> stop writing crappy code that doesn't do good input validation
> (cough).  Of course that is a nirvana not likely to be seen in our
> lifetimes.
> 
> Wow, did I just write an article damn near equal in length to the
> InfoSec Island one that started this thread?  Either I have free time
> to spare or I'm really into the concept of patching known
> vulnerabilities.  Unfortunately for me, it's the latter.
> 
> Peace,
> Vic
> 

_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/
