Message-ID: <20041223063955.49375.qmail@cr.yp.to>
Date: 23 Dec 2004 06:39:55 -0000
From: "D. J. Bernstein" <djb@...yp.to>
To: bugtraq@...urityfocus.com
Subject: Re: DJB's students release 44 *nix software vulnerability advisories

http://www.cs.umd.edu/~waa/pubs/Windows_of_Vulnerability.pdf summarizes
reports to CERT of intrusions through three particular security holes.
Most intrusions occurred months or years after the holes were disclosed
to the public. (Let's assume that reports to CERT are noticeably
correlated with actual damage.)

Crispin starts from these three examples of intrusions occurring _after_
full disclosure, and---applying the principle ``post hoc, ergo propter
hoc''---leaps to the astounding conclusion that the intrusions were
_caused_ by full disclosure, i.e., that avoiding disclosure would have
prevented the intrusions.

Crispin's conclusion is obviously incorrect. We've all seen reports of
extensive damage caused by attackers exploiting security holes that
_weren't_ publicly known before the attacks. Clearly the attackers are
capable of reading software and finding security holes for themselves.
This isn't rocket science.

There is, by the way, a more subtle problem with the argument against
full disclosure: the argument focuses entirely on short-term effects and
ignores long-term effects. But the basic problem with the argument is
that it's out of whack with reality. If you think that hiding security
information keeps us safe, you're deluding yourself.

---D. J. Bernstein, Associate Professor, Department of Mathematics,
Statistics, and Computer Science, University of Illinois at Chicago