Message-ID: <21bcc0400707201143j16c92687tef9b05ff376e26a@mail.gmail.com>
Date: Fri, 20 Jul 2007 14:43:16 -0400
From: "Aaron Katz" <atkatz@...il.com>
To: bugtraq@...urityfocus.com
Subject: Re: Internet Explorer 0day exploit
Oh wait - it wasn't Kerckhoffs' assumption. Sorry, I wasn't thinking
when I wrote that. It's "by definition".
On 7/20/07, Aaron Katz <atkatz@...il.com> wrote:
> > Exactly. Why is it that many people seem to agree that it's less likely
> > that something bad will happen if information is not disclosed?
>
> This is the classic argument between open and closed source, and full
> and delayed disclosure. And it all boils down to one thing: there is
> no evidence, either way. When we can start looking at reviewed and
> reproduced scientific studies that indicate what *really* happens when
> a vulnerability is fully disclosed, versus what happens when a
> vulnerability is kept quiet until the manufacturer is able to fix it,
> then we'll be able to have a good conversation. Until then,
> *everything* is theoretical.
>
>
> > There are likely more "good" people out
> > there than "bad". If x% of the good guys look at it, they will likely
> > amount to a larger number of people than an equal x% of the
> > bad.
>
> But there are classic problems with this approach. We don't know
> what percentage of machines will be instantly protected. We do not
> know how many machines will be protected only once the vendor offers a
> patch. We do not know how many machines will never be protected.
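>
> To make the shape of that argument concrete, here's a purely
> illustrative back-of-envelope sketch in Python. Every number in it
> is invented; the point is only that the conclusion swings on exactly
> these unknowns:
>
>     # Toy model: every figure here is a made-up assumption, not data.
>     good_admins = 1000000  # good guys who could patch or mitigate
>     bad_actors = 10000     # bad guys who could weaponize the report
>     x = 0.05               # fraction of each group that acts on it
>
>     protected = x * good_admins  # good guys who defend early
>     armed = x * bad_actors       # bad guys who newly learn of it
>
>     print("protecting early:", int(protected))  # 50000
>     print("newly armed:", int(armed))            # 500
>
> Even when the raw counts favor the good guys, a single armed
> attacker can hit many unpatched machines, so the comparison still
> hinges on numbers nobody has measured.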
>
> We do know that the bad guys now know about the exploit (Kerckhoffs'
> assumption), and that there is *some* number of machines they can
> now exploit, which they could not have exploited if (a) they hadn't
> discovered, or otherwise learned about, the vulnerability, and (b)
> the good guy who discovered the vulnerability had waited for the
> vendor to release a patch.
>
> But the classic problem with the argument I'm offering is that we
> don't know whether an attacker has already found the vulnerability
> independently; if one has, then full disclosure gives the good guys
> the opportunity to protect themselves.
>
> But we simply don't know the numbers, or even have any reasonable
> guesses (guesses based on hard evidence, not just on feelings). How
> many good guys will protect themselves when full disclosure is made?
> How many bad guys will suddenly know about an attack that they
> didn't know about before?
>
>
> Oh, and someone mentioned something about partial disclosure -
> stating "product Y has a vulnerability in location X, but I'm not
> telling you
> how to exploit it". 10 years ago, I saw this a lot (I haven't read
> bugtraq for 10 years), and, literally within days, there would be an
> exploit posted by another researcher who wanted to make a name for
> himself. Therefore, there is no such thing as "partial disclosure".
> If it's publicly disclosed, even with minimal information, then it
> should be considered fully disclosed.
>
>
> However, let me try to actually add something to this discussion,
> rather than going around the exact same conversational circles that
> have been turning for more than 10 years.
>
> There are things we can consider within each of the realms of
> disclosure, independently, in an effort to weigh the risks. I expect
> I'm not the first to say this, but I think it's more interesting to
> try to investigate the two concepts completely independently of each
> other, rather than for the strict purpose of comparing them. And if
> we are ever to have a chance of comparing the two types of
> disclosure, we must have a very good understanding of the baggage
> that each type entails.
>
> I find it easier to think about delayed disclosure, rather than full
> disclosure, to start.
>
> I imagine it would be reasonable to propose that the complexity of
> both discovering and exploiting a vulnerability is worth considering.
> The idea here is that a vulnerability that is more difficult to find
> is less likely to be found by anyone - good or bad. By definition,
> it's less likely that the bad guy has found it, so it's safer to work
> with the vendor than it is to publicly disclose the problem. (Note:
> "less likely" and "safer" relative to a more easily discovered
> vulnerability having already been found by a bad guy, not with
> regards to full disclosure being better or worse.)
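>
> One way to state that intuition, as a toy model only: treat each
> person studying the product as having some independent chance p of
> finding a given bug. Neither p nor the head counts are measurable
> today, which is rather my point, but the shape is suggestive:
>
>     # Toy model: p and n are invented assumptions, not measurements.
>     def prob_already_found(p, n):
>         """Chance that at least one of n independent searchers has
>         found the bug, if each finds it with probability p."""
>         return 1 - (1 - p) ** n
>
>     bad_guys_looking = 200
>     for p in (0.001, 0.01, 0.1):  # harder -> easier to discover
>         print(p, prob_already_found(p, bad_guys_looking))
>
> A hard-to-find bug (small p) leaves a low chance that some bad guy
> already holds it, which is the case where waiting for the vendor
> looks safer.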
>
> At the same time, it seems reasonable to propose that the popularity
> of the particular product should have bearing as well. It seems well
> accepted that a more popular product will have more bugs (and more
> vulnerabilities) discovered in it, both by the good guys and by the
> bad guys. Therefore, if a vulnerability is discovered in a less
> popular product, it is again probably safer to keep it quiet until
> the vendor supplies a patch. (Note: safer relative to a more popular
> product, not with regards to full disclosure being better or worse.)
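>
> The popularity argument plugs into the same toy model: popularity
> mostly changes n, the number of people (good and bad) poking at the
> product:
>
>     # Reusing prob_already_found() from the sketch above; these
>     # numbers are still invented.
>     p = 0.01
>     for n in (10, 200, 5000):  # obscure -> wildly popular product
>         print(n, prob_already_found(p, n))
>
> More eyes on a popular product means a higher chance the bad guys got
> there first; for an obscure product, quiet coordination with the
> vendor looks relatively safer.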
>
> --
> Aaron
>