Date: Fri, 29 Oct 2004 09:20:40 -1000 (HST)
From: Tim Newsham <newsham@...a.net>
To: Michael Wojcik <Michael.Wojcik@...rofocus.com>
Cc: Valdis.Kletnieks@...edu, bugtraq@...urityfocus.com,
	David Brodbeck <DavidB@...l.interclean.com>
Subject: RE: Update: Web browsers - a mini-farce (MSIE gives in)


> > > You don't have to understand how to exploit a buffer overflow in
> > > order to avoid overflowing buffers.
> >
> > But you have to think of a buffer being overflowed to check for
> > it.
>
> Anyone who doesn't understand that a finite-size container cannot hold more
> than what it can hold is unlikely to manage to write software.

Michael, the entire premise of your posting is that a programmer
can write secure code without knowing what constitutes exploitable
code.  I'm going to have to strongly disagree.

It might be the case that a programmer recognizes that there is
a bug when there is a fixed-size container and it's possible
to put more into it than it should be able to contain.  Even this
is not always the case.  Heck, I write software and sometimes
there are buffer overflows in my own code that I don't recognize
at first, and I get paid to analyze software looking for problems
like this.
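
To give a feel for how easy that is, here's a contrived fragment
(made up for illustration, not from any real codebase) where the
length check looks right at a glance but the terminating NUL still
lands one byte past the end of the buffer:

    #include <stdio.h>
    #include <string.h>

    void greet(const char *name)
    {
        char buf[16];

        /* The check forgets to leave room for the terminating NUL:
         * a 16-character name copies 17 bytes into a 16-byte buffer. */
        if (strlen(name) <= sizeof(buf)) {
            strcpy(buf, name);              /* off-by-one overflow */
            printf("hello, %s\n", buf);
        }
    }

Nothing about it jumps out in a diff, and it behaves fine on every
short test input you throw at it.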

But let's assume that a good programmer is writing software and
it comes to his attention that there is a buffer overflow, or
that user input is not being filtered, or that user input is being
passed to a printf-type function.  What happens next?  Well, it
depends on how many bugs there are, how much other work needs
to be done, and, very importantly, what the perceived impact of
that bug is.  You cannot imagine how many times a bug is pointed
out and the author of the software says "ok, that bug can only
happen if the user does something stupid, and it is not exploitable.
Let's defer that one."  It turns out that these bugs are exploitable
a lot more often than most non-security people know.  It can even
be hard for people who do this for a living to make the proper
determination without lots and lots of analysis and head-banging,
but that's another discussion...
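
The printf case is the classic version of that conversation.  Something
like the following (again a made-up fragment, not from anyone's real
code) looks cosmetic to a lot of authors, because in normal use it
prints exactly what you expect:

    #include <stdio.h>

    void log_user(const char *username)
    {
        /* Bad: the user controls the format string.  %s and %n
         * conversions in the input read and write memory the
         * author never intended to expose. */
        printf(username);

        /* Fix: user input is passed as data, never as the format. */
        printf("%s", username);
    }

"The user would have to type percent signs into their own name" sounds
like the user doing something stupid, right up until someone types %n.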

In summary, I think it's very important that software writers understand,
at least in some minimal way, what constitutes an exploitable security
vulnerability.  You cannot expect to write code that is free of
security defects without understanding the principles.  (OK, in fairness
you can't really expect to write code that is free of security
defects even when you understand the principles, but at least it
helps.)

> Michael Wojcik
> Principal Software Systems Developer, Micro Focus

Tim N.

