Message-ID: <200412212148.iBLLm1R0016874@linus.mitre.org>
Date: Tue, 21 Dec 2004 16:48:01 -0500 (EST)
From: "Steven M. Christey" <coley@...re.org>
To: bugtraq@...urityfocus.com
Subject: Re: DJB's students release 44 *nix software vulnerability advisories



Beyond knowing which packages were found to be vulnerable, it seems
like it would be equally or more informative to know which other
packages were audited and *not* found to have bugs.  The bulk of the
"7500 man-hours" was probably spent *confirming* the security of some
of the software, and some students may simply have happened to select
well-written software.

It might also be useful to know which individuals looked for which
kinds of vulnerabilities.  For example, one report dealt with buffer
overflows in a long SQL query that was apparently controllable by the
user, but there was no mention of whether SQL injection
vulnerabilities were investigated.  Another mentioned directory
traversal via ".." sequences but didn't mention "/absolute/pathname"
vulns (which could be thought of as the same general issue, except
that a lot of software is vulnerable to one but not the other).
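To illustrate that asymmetry, here is a minimal hypothetical C
sketch; the DOCROOT path and the serve_file() function are invented
for illustration and aren't drawn from any of the audited packages.
The ".." check catches relative traversal, but confinement then
rests entirely on the working directory, so an absolute name sails
through:

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  #define DOCROOT "/var/www/files"

  static FILE *serve_file(const char *name)
  {
          /* Rejects "../../etc/passwd"-style traversal... */
          if (strstr(name, "..") != NULL)
                  return NULL;

          /* ...but the only confinement is the working directory. */
          if (chdir(DOCROOT) != 0)
                  return NULL;

          /* "/etc/passwd" is absolute, ignores the cwd, and opens. */
          return fopen(name, "r");
  }

Rejecting a leading '/' (or better, canonicalizing with realpath()
and verifying the DOCROOT prefix) would close the second hole;
software that performs only the ".." check shows exactly the
asymmetry above.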

The bulk of the reported issues seems to be classic buffer overflows,
a rate of about 70%, which is rather high compared to the industry-wide
average of about 20% in recent years.  This could suggest an
unintended bias somewhere in the audits, e.g. in the tools or
techniques used by the students, or the types of packages that were
audited.  However, there were insufficient details to know whether
these overflows resulted from newer-style attacks, e.g. by modifying
length fields to be inconsistent with the real length of the input.
And to be fair, the industry-wide figures for publicly reported issues
reflect their own biases.
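For reference, the "inconsistent length field" pattern looks roughly
like the following hypothetical C sketch; the one-byte record format
and the parse_record() name are invented for illustration.  The
parser trusts a length value carried in the input rather than the
destination size or the bytes actually available:

  #include <stdint.h>
  #include <string.h>

  struct record {
          char name[64];
  };

  /* Assumed wire format, for illustration: [1-byte len][payload] */
  static int parse_record(const uint8_t *buf, size_t buflen,
                          struct record *out)
  {
          if (buflen < 1)
                  return -1;

          size_t len = buf[0];    /* attacker-controlled, up to 255 */

          /* BUG: len is checked against neither sizeof(out->name)
             nor buflen - 1, so len > 63 overflows the fixed buffer
             and can also read past the end of the input. */
          memcpy(out->name, buf + 1, len);
          out->name[len] = '\0';
          return 0;
  }

The fix is the usual pair of checks before the copy:
len <= buflen - 1 and len < sizeof(out->name).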

About 20% of the issues weren't classifiable into well-known
categories, while the rest fell into just 5 of the 20 or so
well-known categories.

All that said, I agree that this is an excellent effort, and it would
be great to see even more focused activities like this in the academic
world.

- Steve

