Date: 22 Jan 2004 13:04:06 -0500
From: "Christopher E. Cramer" <chris.cramer@...e.edu>
To: Eric Rescorla <ekr@...m.com>
Cc: bugtraq@...urityfocus.com
Subject: Re: Paper announcement: Is finding security holes a good idea?


Eric,

I would tend to agree with the other critiques of the paper and would
add one more point.  In your analysis of p_r (the probability of
rediscovery), you assume its value to be very low, if not zero, for most
bugs (as a side note, "vulnerabilities" would be a much better term
here).  This is derived by noting that the number of bugs is very large
(IEEE studies put the number of bugs, though not necessarily
vulnerabilities, at anywhere from one per ten to one per hundred lines
of code) and that the number of discovered bugs is low.  One thing this
ignores is that the discovery of vulnerabilities is often clustered, as
new classes of security issues are recognized.
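To make the back-of-envelope reasoning concrete, here is a minimal sketch; the code size, defect densities, and yearly discovery count are illustrative assumptions, not measured values:

```python
# Rough estimate of p_r under the "many bugs, few finders" assumption.
# All numbers below are illustrative placeholders, not measured data.

loc = 1_000_000                       # lines of code in a hypothetical package
densities = (1 / 100, 1 / 10)         # bugs per line (the IEEE-style range)
found_per_year = 50                   # vulnerabilities reported per year (assumed)

for density in densities:
    total_bugs = loc * density
    # If researchers sample bugs independently and uniformly, the chance
    # that one specific bug is rediscovered in a year is roughly:
    p_r = found_per_year / total_bugs
    print(f"density {density:.2f}: ~{total_bugs:,.0f} bugs, p_r ~ {p_r:.2%}")
```

Under uniform sampling p_r comes out tiny, which is the paper's point; clustering by vulnerability class breaks the uniformity assumption and can make p_r much larger for bugs in a newly recognized class.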

For example, over the past two years we've seen an explosion in the
number of cross-site scripting vulnerabilities being reported.  This
occurred (and continues) as the community recognized new types of XSS
vulnerabilities and tested those techniques against older software
packages.  Another example is the increasing number of "off by one"
reports as researchers (both black- and white-hatted) search through
applications for code that might be affected.

Of course, that theory breaks down somewhat when considering the
classics (like buffer overflows).  However, since the discovery of some
classes of vulnerabilities is clustered, it might be interesting to
search through ICAT to see whether there is a temporal correlation in
the types of bugs discovered, and to see if you can factor that into
your paper.
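A rough sketch of how that search might look, assuming ICAT entries have been exported as (category, report year) pairs; the field layout and the data here are hypothetical:

```python
from collections import Counter

# Hypothetical (category, year) pairs exported from an ICAT-style database.
records = [
    ("xss", 2002), ("xss", 2002), ("xss", 2003), ("xss", 2003), ("xss", 2003),
    ("off-by-one", 2003), ("off-by-one", 2003),
    ("buffer-overflow", 2001), ("buffer-overflow", 2002), ("buffer-overflow", 2003),
]

# Count reports per (category, year).  A class looks "clustered" when most
# of its reports fall in a short window after the class is first recognized.
by_class_year = Counter(records)
for category in sorted({c for c, _ in records}):
    years = sorted(y for c, y in records if c == category)
    span = years[-1] - years[0] + 1
    peak_year, peak = max(
        ((y, by_class_year[(category, y)]) for y in set(years)),
        key=lambda pair: pair[1],
    )
    print(f"{category}: {len(years)} reports over {span} year(s), "
          f"peak of {peak} in {peak_year}")
```

A steady class (buffer overflows) shows a flat count per year, while a newly recognized class (XSS, off-by-one) shows a sharp peak shortly after its first appearance.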

-c

--
Christopher E. Cramer, Ph.D.
University Information Technology Security Officer
Duke University,  Office of Information Technology
PGP Public Key: http://www.duke.edu/~cramer/cramer.pgp


On Wed, 2004-01-21 at 18:41, Eric Rescorla wrote:
> Bugtraq readers might be interested in this paper:
> 
>                    Is finding security holes a good idea?
> 
>                              Eric Rescorla
>                    RTFM, Inc.   <http://www.rtfm.com/>
> 
> A large amount of effort is expended every year on finding and patching
> security holes. The underlying rationale for this activity is that it
> increases welfare by decreasing the number of bugs available for
> discovery and exploitation by bad guys, thus reducing the total cost of
> intrusions. Given the amount of effort expended, we would expect to see
> noticeable results in terms of improved software quality. However, our
> investigation does not support a substantial quality improvement--the
> data does not allow us to exclude the possibility that the rate of bug
> finding in any given piece of software is constant over long periods of
> time. If there is little or no quality improvement, then we have no
> reason to believe that the disclosure of bugs reduces the overall
> cost of intrusions.
> 
> The paper can be downloaded from:
> http://www.rtfm.com/bugrate.pdf
> http://www.rtfm.com/bugrate.ps


