Message-ID: <OF135C766E.2CBF7808-ON88256E23.004D602F-88256E23.0054B046@symantec.com>
Date: Thu, 22 Jan 2004 07:24:53 -0800
From: Oliver Friedrichs <oliver_friedrichs@...antec.com>
To: Eric Rescorla <ekr@...m.com>
Cc: bugtraq@...urityfocus.com
Subject: Re: Paper announcement: Is finding security holes a good idea?


Eric, some thoughts (this is not an argument for or against 
full disclosure; please take it as constructive criticism):

After reading your paper, I agree that the data you used may support your 
arguments; however, you missed some important points.  You don't take into 
account the "type" of vulnerabilities being found in each of the 
applications that you've analyzed (you address Severity, but that's a 
separate variable).  I would argue that if you did, you would come to 
some different conclusions.  I would also argue that the caliber of bugs 
being found has increased quite substantially.  Easy-to-find, 
easy-to-exploit vulnerabilities have for the most part been exhausted in 
applications that have been "sufficiently" scrutinized.  The bugs being 
found today (in applications that have had a sufficient amount of 
scrutiny) are significantly more complex than those of a number of years 
ago (in the same application).  In the early 1990s, you could find a 
common buffer overflow (using a blatant strcpy/strcat) in many common 
core Internet applications.  Today, the vulnerabilities being discovered 
in these same applications are more complex off-by-one, signed/unsigned 
integer, compiler casting, or byte/character processing problems.  Good 
examples of this trend are sendmail and BIND.  "Type" also implies the 
skill level required for a researcher to find a given vulnerability, due 
to its difficulty: some types are easy to find, while others are 
extremely difficult.
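
To make the contrast concrete, here are two hypothetical C fragments 
(my own illustration only, not taken from sendmail, BIND, or any other 
particular application): the first is the blatant kind of strcpy 
overflow that was common in the early 1990s, the second a 
signed/unsigned length-check bug of the sort being found today.

    #include <string.h>

    /* Early-1990s style: no length check at all. */
    void greet(const char *name)
    {
        char buf[64];
        strcpy(buf, name);              /* trivially overflowed */
    }

    /* Modern style: there *is* a length check, but it uses a signed
       int, so a negative length passes the test and is then converted
       to a huge size_t by memcpy. */
    void read_record(const char *pkt, int len)
    {
        char buf[64];
        if (len > (int)sizeof(buf))     /* looks safe at a glance... */
            return;
        memcpy(buf, pkt, len);          /* ...but len < 0 slips through */
    }

The first bug jumps out of a source listing; the second requires 
reasoning about integer conversions across a call boundary, which is 
exactly the jump in difficulty I'm describing.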

Entirely new classes of vulnerabilities are rarely discovered (though 
they occasionally still are).  There can only be a finite number of such 
classes, and once they have all been identified, the bar can't be raised 
any higher.  Of course, there can only be a finite number of 
vulnerabilities in any given application as well.

Two other metrics that you don't measure are the amount of scrutiny that 
a particular application has had and the size of the application.  A 
large application that has had 30 vulnerabilities found in it by one 
researcher over 10 years cannot be compared to a small application that 
has had 30 vulnerabilities found in it by 200 researchers in one year. 
Thirty vulnerabilities were found in both, but the latter application 
will have improved in quality quite significantly, while the former will 
not (assuming that both applications have the same average number of 
vulnerabilities per line of code).  Unfortunately, scrutiny is likely 
something that you cannot measure for many applications.
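
As a back-of-the-envelope illustration (the "researcher-year" 
normalization below is my own, not a metric from your paper), the same 
raw count of 30 conceals very different amounts of scrutiny:

    #include <stdio.h>

    int main(void)
    {
        /* identical raw counts, very different amounts of scrutiny */
        double large_app = 30.0 / (1.0 * 10.0);   /* 1 researcher, 10 years  */
        double small_app = 30.0 / (200.0 * 1.0);  /* 200 researchers, 1 year */

        printf("large app: %.2f vulns per researcher-year\n", large_app); /* 3.00 */
        printf("small app: %.2f vulns per researcher-year\n", small_app); /* 0.15 */
        return 0;
    }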

So, on average, I would argue that software quality (in terms of the 
vulnerabilities being discovered) has improved for a given application 
that has been, and continues to be, sufficiently scrutinized (not 
counting substantial updates that introduce new bugs).  You simply don't 
have all of the data points to prove it, and therefore may be missing 
important conclusions.  I may be able to take specific applications 
where we have sufficient visibility into scrutiny, size, and 
type/difficulty of vulnerabilities and prove your theory wrong (sendmail 
is a possibility).

My conclusion isn't based on the numbers, but simply on my experience 
researching vulnerabilities since the early 1990s.

Oliver Friedrichs
Sr. Manager - DeepSight
Symantec, Inc. (650) 381-8045

> Bugtraq readers might be interested in this paper:
> 
>                    Is finding security holes a good idea?
> 
>                              Eric Rescorla
>                    RTFM, Inc.   <http://www.rtfm.com/>
> 
> A large amount of effort is expended every year on finding and patching
> security holes. The underlying rationale for this activity is that it
> increases welfare by decreasing the number of bugs available for
> discovery and exploitation by bad guys, thus reducing the total cost of
> intrusions. Given the amount of effort expended, we would expect to see
> noticeable results in terms of improved software quality. However, our
> investigation does not support a substantial quality improvement--the
> data does not allow us to exclude the possibility that the rate of bug
> finding in any given piece of software is constant over long periods of
> time. If there is little or no quality improvement, then we have no
> reason to believe that the disclosure of bugs reduces the overall
> cost of intrusions.
> 
> The paper can be downloaded from:
> http://www.rtfm.com/bugrate.pdf
> http://www.rtfm.com/bugrate.ps
> 


