Date: Thu, 06 Nov 2003 13:48:07 -0600
From: Paul Schmehl <pauls@...allas.edu>
To: "Steven M. Christey" <coley@...re.org>, thor@...x.com
Cc: bugtraq@...urityfocus.com
Subject: Re: RE: Six Step IE Remote Compromise Cache Attack


--On Wednesday, November 05, 2003 8:27 PM -0500 "Steven M. Christey" 
<coley@...re.org> wrote:
>
> Maybe I'm alone in this, but I find web browser bugs like these to be
> among the most complex and difficult-to-understand vulnerabilities
> that get reported.  An aspect of that complexity often seems to
> involve crossing several intended security "boundaries" in the
> process, taking advantage of design choices that, by themselves, don't
> seem to be that security-relevant.  Example: one might think that
> non-random locations for software components would be a good thing,
> but it's a factor in a number of web client bugs.  (Another aspect of
> that complexity comes from advisories that simply include exploit code
> using obscure components or elements but don't suggest where the issue
> actually lies, but that's a different matter.)
>
But isn't this crossing of security boundaries essentially caused by the 
same mental error that causes buffer overflows?  Trusting untrustworthy 
input is at the foundation of each, isn't it?

If you create a boundary that says, "This is private space.  Only trusted 
data can enter," yet you decide, for whatever supposedly legitimate 
reason, to allow input from some other space, isn't it incumbent upon you 
as the programmer to disallow all but "proper" input?

It appears to me that this chaining of weaknesses is nothing more than an 
extension of the same problem that each weakness has individually, i.e., 
the programmer's failure to do "bounds" checking.  Granted, it's more 
complex to figure out how to exploit chained weaknesses, but the exploit 
is possible because of the same naive trust that fails us every time.
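That naive trust shows up concretely whenever a program believes a length 
field supplied by the other side.  A minimal sketch (hypothetical record 
format and function names, not drawn from any particular advisory; in C 
the same mistake becomes a classic buffer overflow, here it is shown in 
Python for brevity):

```python
def parse_record_unsafe(data: bytes) -> bytes:
    # First byte is a length field supplied by the sender.
    # It is naively trusted: a lying length reads beyond the real
    # payload (Python slicing silently truncates; in C this is
    # where the buffer overflow happens).
    claimed_len = data[0]
    return data[1:1 + claimed_len]


def parse_record_checked(data: bytes) -> bytes:
    # "Bounds" checking: verify the claimed length against what
    # was actually received before acting on it.
    claimed_len = data[0]
    if claimed_len != len(data) - 1:
        raise ValueError("length field does not match payload")
    return data[1:1 + claimed_len]
```

The checked version embodies the point above: the boundary is only as good 
as the validation performed at it.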

We need a paradigm shift in programming from "allow all but the known bad" 
to "disallow all but the known good", don't we?
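In outline, the shift might look like this — a hedged sketch with 
hypothetical patterns, not tied to any particular browser bug:

```python
import re

# Denylist: enumerate the known bad.  Anything nobody thought of
# (a new scheme, a new encoding trick) slips straight through.
BAD_SUBSTRINGS = ("<script", "javascript:")

def allowed_by_denylist(value: str) -> bool:
    # "Allow all but the known bad."
    return not any(bad in value.lower() for bad in BAD_SUBSTRINGS)

# Allowlist: define the known good and reject everything else.
ALLOWED = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def allowed_by_allowlist(value: str) -> bool:
    # "Disallow all but the known good."
    return ALLOWED.fullmatch(value) is not None
```

An input like "vbscript:msgbox(1)" passes the denylist, because nobody 
enumerated it, but fails the allowlist, because it was never declared 
good — which is exactly the asymmetry the paradigm shift is about.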

Paul Schmehl (pauls@...allas.edu)
Adjunct Information Security Officer
The University of Texas at Dallas
AVIEN Founding Member
http://www.utdallas.edu

