Message-ID: <20070720210852.GC59063@demeter.hydra>
Date: Fri, 20 Jul 2007 15:08:52 -0600
From: Chad Perrin <perrin@...theon.com>
To: bugtraq@...urityfocus.com
Subject: Re: Internet Explorer 0day exploit
On Wed, Jul 18, 2007 at 10:12:11PM +0200, Chris Stromblad wrote:
>
> Zow Terry Brugger wrote:
> >> What exactly constitutes a 0day? From my perspective, naming a
> >> vulnerability 0day has absolutely no value whatsoever; it just doesn't
> >> make any sense. 0day for whom? The person who releases it, sure, but for
> >> the security community as a whole... nah.
> >
> > I consider a "0day" to be a vulnerability for which there is an exploit in
> > the wild before there's a vendor patch for the problem. If this convention is
> > followed, it has value to the community, because we know that having that
> > software on our systems presents a significant risk.
>
> Fair point, had not considered that before. It would be better to just
> call it an active vulnerability vs. an inactive vulnerability. Active would
> describe something that cannot yet be prevented and has "wild" exploit
> code, whereas an inactive vulnerability is something that has been patched
> and _should_ no longer be applicable. Anyway, you make a good point.
Shouldn't that distinction be simply marked by the terms "patched" and
"unpatched"?
My understanding of the term "zero day exploit" is that it refers to a
vulnerability for which there is an exploit "in the wild" or known
publicly, concurrent with knowledge of the vulnerability itself. Thus,
the exploit is out there on "day zero".
The term "zero day vulnerability" just seems to be an error of phrasing,
where "vulnerability" replaces "exploit", and as a result confusion such
as in this discussion arises. Of course, I'm not "in charge" of
maintaining these terms' meaning, so I imagine my opinion of the matter
won't count for much.
> >
> >> One more thing about "advisories". I think it would be better to release
> >> them immediately and let people know what they are facing. With public
> >> dissemination of a vulnerability perhaps someone will release a 3rd
> >> party patch or another inventive way of protecting oneself. Holding it
> >> "secret" really doesn't help anyone. If anything it prevents people from
> >> trying to find a way to fix the vulnerability.
> >
> > First off, I don't think anyone can seriously say it doesn't help _anybody_
> > -- it certainly helps the vendor. If it's an IDS/IPS company that holds the
> > research and they've got a signature out for it on their system, it certainly
> > helps them. Here we find a variation on the ancient (in Internet terms)
> > argument about full disclosure: if bugs are public knowledge, will vendors be
> > more responsive to fixing them? I don't think you're going to see publicly
> > developed patches for any but the most extreme cases. At the same time, I see
> > some advisories where the vendor was notified more than six months ago and
> > just has a patch out now. That's a pretty large window of vulnerability if
> > anyone malicious knows about the problem (and if we're finding them in the
> > open community, there's no reason they wouldn't). I think security
> > researchers need to continue to think about exerting due pressure on vendors
> > to get bugs patched.
>
> Yes, it sure helps the vendor, but it certainly doesn't help the end
> user/customer, and in my opinion it is the end user who matters. The
> vendor has a responsibility toward the consumer, not the other way
> around. Without any pressure exerted on vendors, I believe things will
> never change. A vendor has no incentive to become proactive in
> making their software more secure. At least not until enough people
> start demanding more secure software...
It looks like the two of you are in violent agreement.
--
CCD CopyWrite Chad Perrin [ http://ccd.apotheon.org ]
John Kenneth Galbraith: "If all else fails, immortality can always be
assured through spectacular error."