Message-ID: <Pine.LNX.4.64.0710110824040.5639@localhost.localdomain>
Date: Thu, 11 Oct 2007 08:37:47 -0400 (EDT)
From: gboyce <gboyce@...belly.com>
To: "pdp (architect)" <pdp.gnucitizen@...glemail.com>
Cc: "Thor (Hammer of God)" <thor@...merofgod.com>,
	full-disclosure@...ts.grok.org.uk, bugtraq@...urityfocus.com
Subject: Re: [Full-disclosure] Remote Desktop Command Fixation Attacks

On Thu, 11 Oct 2007, pdp (architect) wrote:

> Thor, with no disrespect but you are wrong. Security in depth does not
> work and I am not planning to support my argument in any way. This is
> just my personal humble opinion. I've seen only failure of the
> principles you mentioned. Security in depth works only in a perfect
> world. The truth is that you cannot implement true security mainly
> because you will hit on the accessibility side. It is all about
> achieving the balance between security and accessibility. Moreover,
> you cannot implement security in depth mainly because you cannot
> predict the future. Therefore, you don't know what kinds of attack
> will surface next.
>
> Security is not a destination, it is a process. Security in depth
> sounds like a destination to me.

The reason for security in depth is precisely that no security control is 
foolproof.  The point isn't to make a system completely unbreakable, but 
to raise the bar on what an attacker needs to do in order to extend their 
access beyond what they already control.

Let's take a webserver as an example.

Your webserver only needs ports 80 and 443 reachable from the world, so 
you deploy a firewall in front of it restricting access to just those 
ports.
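
Host-based or on a dedicated box in front, the intent is the same.  A 
rough host-level sketch in iptables syntax (interface and rule details 
will vary, so treat this as an illustration rather than a recipe):

    # default-deny inbound; allow loopback and replies to our own traffic
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # the only two services the world should be able to reach
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT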

A default install of the OS may enable a few other processes listening on 
network ports, like a mail server, portmap, etc.  These processes aren't 
needed on this particular system.  The firewall blocks access to them, but 
firewalls aren't perfect, and the attacker may have found a way to get 
behind it.  So you turn off those unneeded services.
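
On a Red Hat style system that's roughly the following (Debian uses 
update-rc.d instead, and the service names here are only examples of 
what a default install might leave running):

    # stop the daemons now and keep them from starting at boot
    service sendmail stop ; chkconfig sendmail off
    service portmap stop  ; chkconfig portmap off
    # then confirm nothing unexpected is still listening
    netstat -tlnp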

Being a webserver, it's running a number of web applications.  Since you 
don't want to place more trust in those applications than you have to, you 
chroot apache and have it run as a non-privileged user.  Hopefully this 
will contain a successful compromise.
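
The unprivileged part is just two directives in httpd.conf; apache binds 
the low ports as root and then drops to that account for its worker 
processes.  The account name is whatever your distribution uses (apache, 
www-data, etc.); the chroot itself takes more setup (mod_chroot or a 
hand-built jail), which I won't go into here.

    # httpd.conf: run the worker processes as an unprivileged account
    User apache
    Group apache

    # after a restart, only the parent process should be running as root
    ps -C httpd -o user,pid,comm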

But still, the attacker may break out of the chroot, so you make sure that 
you remove setuid applications or at least keep them up to date with the 
latest security updates.  You do your best to keep the attacker from 
becoming root.  But even that may fail.
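
Auditing those is a one-liner; what you actually strip or remove is a 
judgment call for the box in question (the chmod target below is a 
placeholder, not a recommendation):

    # list setuid/setgid binaries on the local filesystems
    find / -xdev -type f \( -perm -4000 -o -perm -2000 \) -ls
    # strip the bit from anything the webserver has no business running
    chmod u-s /path/to/unneeded/binary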

Assuming all else has failed, this system is completely owned.  But you 
have other systems with even more sensitive information.  So you architect 
your network such that this webserver does not have more network 
privileges than it needs.  You filter outbound network connections, which 
will hopefully block a good portion of botnet command and control 
functions.  You block access from this webserver to other systems unless 
it has a genuine need to talk to them.  You implement application-level 
firewalls between it and the services it does need to talk to.
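
Again in rough iptables terms, with made-up addresses standing in for 
whatever this host actually needs to reach (say, its database and the 
local resolver):

    # default-deny outbound; most command and control callbacks die here
    iptables -P OUTPUT DROP
    iptables -A OUTPUT -o lo -j ACCEPT
    iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    # only the connections this host is supposed to initiate (example IPs)
    iptables -A OUTPUT -p tcp -d 10.0.1.5 --dport 3306 -j ACCEPT
    iptables -A OUTPUT -p udp -d 10.0.0.53 --dport 53 -j ACCEPT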

THIS is defence in depth.  It's not about perfect security.  It's about 
containing breaches.  It's about eliminating unnecessary risks.  It's 
about making sure that a small mistake that you make does not hand over 
the keys to the kingdom.

--
Greg
