Message-Id: <200611272214.kARMEhnd016673@faron.mitre.org>
Date: Mon, 27 Nov 2006 17:14:43 -0500 (EST)
From: "Steven M. Christey" <coley@...re.org>
To: bugtraq@...urityfocus.com
Subject: Re: Re: "Which is more secure? Oracle vs. Microsoft" (is it a fair comparison?)


Large-scale comparisons using historical data, while suggestive, have
certain limitations.  I touched on many of these in my open letter on
the interpretation of vulnerability statistics [1] when talking about
trend analysis in vulnerability databases, but many of the points
apply here.

For example, there appears to be a distinct difference in editorial
policy between Oracle and Microsoft when it comes to publishing
vulnerabilities that the vendors discovered themselves, as opposed to
those reported by third parties.  This might produce larger numbers
for Oracle, which appears to include internally discovered
vulnerabilities in its advisories, whereas this is not necessarily
the case for Microsoft
[2], [3].  In both cases, the lack of details can mean that multiple
issues wind up with one public identifier; for example, Oracle Vuln#
DB01 from CPU Jul 2006 (CVE-2006-3698) might involve 10 different
issues, and this is not an isolated case.  This can further muddy the
waters.
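
To see how much counting policy alone can matter, here is a minimal
sketch in Python.  The advisory names and per-entry issue counts are
invented for the example, apart from the roughly 10 issues attributed
to CVE-2006-3698 above:

  # Hypothetical illustration of how editorial policy skews raw counts.
  # Each advisory entry maps to the number of distinct flaws it bundles;
  # all entries and counts are invented.
  advisory_entries = {"DB01": 10, "DB02": 1, "DB03": 3}

  per_identifier = len(advisory_entries)       # one count per public ID
  per_issue = sum(advisory_entries.values())   # one count per distinct flaw

  print("counted by identifier:", per_identifier)  # -> 3
  print("counted by issue:     ", per_issue)       # -> 14

The same code base looks more than four times "worse" under a
per-issue policy than under a per-identifier policy, without a single
flaw being added or removed.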

Another difficulty that I originally mentioned involved possible
differences in research community bias.  I don't think we really know
how many researchers are looking at which product, what their skill
levels are, and how much effort they're putting into looking.  David
touches on this topic briefly:

  Do the SQL Server 2005 results have no flaws because no-one is
  looking at it?

  No. I know of a number of good researchers who are looking at it.
  SQL Server code is just more secure than Oracle code.

It would be nice if the research community could begin to quantify
the level of effort that goes into its analyses.  If researchers
are putting 10 times more effort into one product than another, then
you might expect to find 10 times more issues (assuming the products
started at the same baseline of quality).
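
As a back-of-the-envelope sketch (all numbers invented), an
effort-normalized comparison would look something like this, where
"effort" is some agreed-upon unit such as researcher-months:

  # Hypothetical effort normalization; the products, issue counts, and
  # effort figures (say, researcher-months) are invented.
  products = {
      # product: (published_issues, research_effort)
      "A": (100, 50),
      "B": (10, 5),
  }

  for name, (issues, effort) in products.items():
      print("product %s: %.1f issues per unit of effort"
            % (name, issues / effort))

  # Both products come out at 2.0 issues per unit of effort: the 10x
  # gap in raw counts vanishes once effort is accounted for.

The hard part, of course, is measuring effort at all, which is
exactly the data we lack today.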

In short, we need better metrics before we can really compare the
relative *inherent* security of products.  (The work from Wing,
Manadhata, and Howard on measuring relative attack surface shows
promise.)  However, public stats are the best we have for now.
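
For a flavor of what such a metric might look like, here is a toy
computation loosely modeled on the damage-potential/effort ratios
used in that line of work; the resources and weights are entirely
invented:

  # Toy sketch loosely based on the relative attack surface idea: each
  # exposed resource contributes a damage-potential / attacker-effort
  # ratio.  Resources and weights are invented.
  resources = [
      # (resource, damage_potential, attacker_effort)
      ("remote TCP listener", 5, 1),
      ("setuid binary",       4, 2),
      ("world-writable file", 2, 1),
  ]

  score = sum(damage / effort for _, damage, effort in resources)
  print("relative attack surface score: %.1f" % score)  # -> 9.0

The absolute number is meaningless in isolation; it only becomes
useful when the same computation is applied to two comparable
systems.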

- Steve


[1] Open Letter on the Interpretation of "Vulnerability Statistics"
    Steven M. Christey, MITRE
    http://lists.grok.org.uk/pipermail/full-disclosure/2006-January/041028.html

[2] Skeletons in Microsoft's Closet - Silently Fixed Vulnerabilities
    Steve Manzuik, eEye
    http://www.blackhat.com/presentations/bh-europe-06/bh-eu-06-Manzuik.pdf

[3] Microsoft Patches: When Silence Isn't Golden
    Ryan Naraine, eWeek
    http://www.eweek.com/article2/0,1895,1951186,00.asp
