Message-ID: <cqg9ri$mps$1@abraham.cs.berkeley.edu>
Date: Fri, 24 Dec 2004 05:35:46 +0000 (UTC)
From: daw@...erner.cs.berkeley.edu (David Wagner)
To: bugtraq@...urityfocus.com
Subject: Re: DJB's students release 44 *nix software vulnerability advisories


Steven M. Christey wrote:
>Besides which packages were found to be vulnerable, it seems like it
>would be equally or more informative to know which other packages were
>audited and not found to have bugs.  The bulk of the "7500 man-hours"
>were probably spent *confirming* the security of some of the software,
>and some students may have accidentally selected well-written
>software.

Sadly, that is intractably hard.  As Dijkstra would be quick to remind
us, testing can show the presence of bugs, but never their absence.
Confirming the security of any real software package is far beyond
what is feasible in the time available for a class project.

Crispin Cowan tried to put together an effort (Sardonix) to get people
to audit software packages and report on which ones were audited and
confirmed to be secure.  The problem is that this is ridiculously
hard, far harder than trolling for a security hole.  There is also the
sociological problem that we reward people more for finding security
holes than for finding none.

To give you some idea of the cost of finding security holes in modern
software packages, let me relate some experience from the graduate
security class I teach.  In Fall 2002, I gave the students a homework
where I asked them to pick some program of interest to them and spend 3-6
hours auditing its security.  About half of the students found at least
one security hole in the package they had picked.  Because the exercise
was so well received by students, I gave the same homework in the class
this year, and saw about the same success rate.  This is not a very scientific
experiment, but it suggests that if we pick a software package at random,
point a smart person with some security training (not necessarily a
security expert) at it, and let them have a few hours, then about 50% of
the time the software package will turn out to have some security hole.
Obviously, this is only a lower bound on how many of our applications
are insecure.  I will leave you to speculate how many software packages
might harbor security flaws if we gave the auditor more time, training,
and resources.
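
(To spell out the lower-bound reasoning: a found flaw demonstrates
insecurity, but a failed search demonstrates nothing, so the fraction of
packages that are flawed can only exceed the fraction in which a flaw
was found.  In rough notation, with the figure taken from the class
results above:

  \Pr[\text{package has a flaw}] \;\ge\; \Pr[\text{flaw found within 3--6 hours}] \;\approx\; 0.5

This is back-of-the-envelope reasoning, not a measured population
statistic.)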

If you're curious, you can see the homework assignments here:
 http://www.cs.berkeley.edu/~daw/teaching/cs261-f02/homeworks/hw2.html
 http://www.cs.berkeley.edu/~daw/teaching/cs261-f04/homeworks/hw2.html
Also, you can see some remarks I wrote back in Fall 2002 about the audits
people did that semester:
 http://www.cs.berkeley.edu/~daw/teaching/cs261-f02/solns/hw2.html
You can see their audits from Fall 2002 on the Sardonix web site:
 https://www.sardonix.org/Audited.html

Anyway, the point is that with 3-6 hours, you can often find a security
flaw in a real software package -- but confirming the security of a
software package in that amount of time is hopelessly impractical.
Reaching any confidence whatsoever that a package is free of bugs would
take orders of magnitude more time than is available in the aforementioned
homework exercises.  It is unfortunate that it is not easier to verify
the security of the software we rely upon, but these are the facts of
life today.

You asked about the relative proportion of different kinds of security
flaws.  Of the flaws turned up by the audits in my class, most were
buffer overruns, and a number were access control flaws (either missing
or incorrect access control measures).  However, I wouldn't read too
much into this: the students saw only a few examples of implementation
flaws in class, so they might not have been looking for other kinds of
flaws.  Also, I asked them to use RATS to help them with part of their
audit, and that tends to direct your attention towards the low-level
implementation defects that RATS can find with simple lexical analysis,
rather than towards higher-level application security issues.
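
To make that last point concrete, here is a minimal sketch (my own
illustration, not code from any audited package) of the kind of call
site a lexical scanner like RATS flags, next to a bounded alternative:

  #include <stdio.h>
  #include <string.h>

  /* Classic buffer overrun: strcpy() copies without checking the
   * destination size, so any name longer than 63 bytes writes past
   * the end of buf.  A lexical scanner flags the strcpy() call itself. */
  void greet_unsafe(const char *name)
  {
      char buf[64];
      strcpy(buf, name);                      /* no bounds check */
      printf("hello, %s\n", buf);
  }

  /* Bounded variant: snprintf() truncates instead of overflowing. */
  void greet_safe(const char *name)
  {
      char buf[64];
      snprintf(buf, sizeof buf, "%s", name);
      printf("hello, %s\n", buf);
  }

  int main(int argc, char **argv)
  {
      if (argc > 1)
          greet_safe(argv[1]);
      return 0;
  }

A purely lexical tool reports every strcpy() it sees, safe or not,
which is precisely why it steers an auditor's attention towards this
class of low-level defect.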

