Date: Thu, 13 Nov 2003 18:04:02 -0500 (EST)
From: "Steven M. Christey" <>
Subject: Re: Funny article

It would be very interesting to see any results that try to compare
the timeliness of vendor response.  I attempted to conduct such a
study a year and a half ago, but it failed due to lack of time and a
number of other factors, such as:

 - the relatively small percentage of disclosure reports that include
   timelines, and the number of errors or uncertain values in those
   timelines (e.g. "I reported this some time ago").  As a result, I
   was only able to determine a "notify-to-publish-to-fix" timeline
   about 10% of the time.

 - the relatively large number of bugs that get publicly reported, but
   do not seem to be acknowledged or fixed by the vendor(s).  Since
   there is no publicly known fix, one can only calculate a MINIMUM
   amount of time-to-fix.

 - the implicit (or explicit) policy that the vendor uses related to
   bug severity.  For example, a minor information leak may be treated
   as a non-security bug whereas a remotely exploitable buffer
   overflow would get an instant fix.

 - the unknown percentage of bugs that were discovered and fixed by
   the vendors themselves.  (Indeed, Microsoft's own acknowledgement
   policy makes it difficult to know whether their uncredited fixes
   are due to their own internal discoveries, or previous public
   disclosures by researchers who did not fully coordinate with
   them.)

 - how one "counts" the number of vulnerabilities.  I view this as one
   of the main roles of CVE; however, any set of vulnerability data
   must be normalized in *some* fashion before being compared,
   otherwise the statistics will be biased.

 - how one determines the date when something is "fixed."  For
   example, consider if the vendor releases a patch that is so broken
   that it prevents correct operation of the software.  Or consider if
   an open source patch is made available at the time of disclosure
   and posted to "semi-public" developer-focused lists, but there's
   some amount of time before the fix makes it into an official
   release.

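As a rough illustration of the censoring problem mentioned above: when
no vendor fix is publicly known, one can only compute a lower bound on
time-to-fix.  Here is a minimal sketch in Python (the function name,
field layout, and dates are all made up for illustration, not drawn
from any real vulnerability database):

```python
from datetime import date
from typing import Optional, Tuple

def time_to_fix(notified: date, fixed: Optional[date],
                as_of: date) -> Tuple[int, bool]:
    """Return (days, is_lower_bound).

    If no fix is publicly known, the true time-to-fix is censored:
    we only know the bug has been open at least until `as_of`, so
    the result is a MINIMUM, flagged by is_lower_bound=True.
    """
    if fixed is not None:
        return (fixed - notified).days, False
    return (as_of - notified).days, True

# Made-up example dates, purely illustrative:
print(time_to_fix(date(2003, 1, 10), date(2003, 2, 1),
                  date(2003, 11, 13)))  # -> (22, False)
print(time_to_fix(date(2003, 1, 10), None,
                  date(2003, 11, 13)))  # -> (307, True)
```

Averaging such censored minimums together with exact values would
understate the true time-to-fix, which is why the flag matters.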
I initially tried to cover a 3-month time span, but it seemed that at
least a year's worth of data was required.

There were some indications (not statistically proven) that
researchers were generally more willing to coordinate with open source
vendors than with closed source ones.  This would further bias any
disclosure statistics.

You can't simply compare published advisories against each other,
because:

  - different vendors have varying criteria for how severe a bug must
    be before an advisory is published

  - some advisories report multiple bugs, which could mean multiple
    disclosure and notification dates, and different times-to-fix

  - sometimes an interim patch is provided before the advisory

  - sometimes security issues are patched through some mechanism other
    than an advisory (e.g. Microsoft's service packs, which fix
    security bugs but don't normally have an associated security
    bulletin)

  - sometimes there are multiple advisories for the same bugs (SCO and
    Red Hat immediately come to mind)

You also can't directly compare by "total bugs per OS" because of the
variance in packages that may or may not get installed, plus how one
defines what is or isn't part of the "operating system" as mentioned
previously.  One way to normalize such a comparison is to compare
"default" installations to each other, and "most secure" installations
to each other - although of course the latter is not always available.
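One way such a normalization could be sketched in code: restrict the
per-OS totals to the packages both default installations share before
comparing.  The package names and counts below are entirely
hypothetical, just to show the shape of the idea:

```python
def normalized_counts(vulns_by_pkg_a: dict, vulns_by_pkg_b: dict):
    """Compare vulnerability totals only over packages present in
    both default installations, so the comparison is like-for-like."""
    shared = set(vulns_by_pkg_a) & set(vulns_by_pkg_b)
    return (sum(vulns_by_pkg_a[p] for p in shared),
            sum(vulns_by_pkg_b[p] for p in shared))

# Hypothetical per-package counts for two OS default installs:
a = {"kernel": 4, "httpd": 3, "mail": 2}
b = {"kernel": 5, "httpd": 1, "gui": 7}
print(normalized_counts(a, b))  # only "kernel" and "httpd" are
                                # in both -> (7, 6)
```

The raw totals (9 vs. 13) would have told a different story than the
normalized ones, which is exactly the bias being described.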

Fortunately, the percentage of vulnerability reports with disclosure
timelines seems to have increased significantly in the past year, so
maybe there is now a critical mass of data available.

As a final note, I have the impression that most vendors (open or
closed, commercial or freeware) don't track their own speed-to-fix,
and *no* vendor that I know of actually *publishes* their
speed-to-fix statistics.

Hopefully someday there will be a solid public paper that actually
tries to quantify the *real* amount of time it takes to fix bugs,
whether on a per-vendor or per-platform basis, and accounts for all
the issues that I described above.  (I know of a couple private
efforts, but they are not public yet.)  Of course, one would want to
verify the raw data as well.

- Steve
