Date: Thu, 6 Nov 2003 23:18:35 +0100
From: Florian Weimer <>
To: Tyler Larson <>
Cc: white colin john <>,
Subject: Re: Six Step IE Remote Compromise Cache Attack

Tyler Larson wrote:

> In fact, I'd go as far as to say that in dealing with any open-source
> software at all, development of a POC exploit is never necessary.

In some cases that's true.  But if it's a remotely exploitable hole that
doesn't require prior authentication of the attacker and it affects
widespread software, I'd rather like to see a proof-of-concept exploit.

Why?  Even if the admin in charge has applied a patch (or a
configuration change), there's a non-zero probability that something
goes wrong: the service is not restarted, the wrong version of the
package has been installed, the file system contains another copy of the
software which is the one that is actually running, the patch lacks a
hunk and is ineffective, the system compiler optimized away the new
security check, and so on.
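At least one of these failure modes -- the service never being restarted after the
patch -- can even be checked mechanically.  A minimal sketch (Python, Linux-only,
relying on /proc timestamps; the function name is my own invention, not part of
any tool mentioned here):

```python
import os

def binary_newer_than_process(binary_path, pid):
    """Return True if the on-disk binary was modified after the given
    process started, i.e. the running copy may still predate the patch.
    Linux-only: the ctime of /proc/<pid> approximates process start."""
    binary_mtime = os.path.getmtime(binary_path)
    proc_start = os.path.getctime("/proc/%d" % pid)
    return binary_mtime > proc_start
```

A True result means the daemon is running code older than the file on disk,
which is exactly the "service is not restarted" case.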

If you look at the recent DCOM scanners, you'll notice that they work
without PoC code.  Microsoft implemented some unrelated changes, and the
difference in behavior is externally observable.  If you have the
source code, you usually apply minimal patches which often don't even
change the version number (distributions such as Debian and SuSE have a
policy of backporting just the isolated security fixes).  That's why remote
probing almost always requires an exploit of some kind.  Not a complete
exploit, with an extensive collection of offsets and shellcode for
various platforms, but some proof of concept, so that we can check
when we are in doubt.
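The probing approach described above can be sketched generically: send a
harmless trigger request and classify the host by the observable difference in
its reply, rather than by its version banner, which a backported fix leaves
unchanged.  A minimal sketch; `trigger` and `vulnerable_sig` are hypothetical
byte strings that would have to come from analysing the actual patch:

```python
import socket

def probe(host, port, trigger, vulnerable_sig, timeout=5.0):
    """Classify a remote service as patched or vulnerable by an
    externally observable behavioral difference, without running
    real exploit code.  `trigger` and `vulnerable_sig` are
    placeholders for patch-derived request/response signatures."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(trigger)
        reply = s.recv(4096)
    # The version banner alone is useless here: distribution patches
    # typically fix the bug without bumping the advertised version.
    return "vulnerable" if reply.startswith(vulnerable_sig) else "patched"
```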

> It's generally easier (and always more productive) to develop the fix
> than the POC anyway.

Sometimes this is true.  A deterministic proof-of-concept exploit for
the recent ircd bug was quite hard to construct; fixing it was much
easier (I had applied a fix even before I fully understood the problem,
and upstream agreed that there was in fact a problem 8-).

> And if a vendor patch exists, POC code won't help anybody. Why?
> Because the existence of a security patch already adequately
> demonstrates the existence of a potential threat.

I agree that exploit code is less helpful for threat assessment.  After
all, you don't know whether the exploit is representative of all methods
of attack, or whether it demonstrates the maximum possible impact.

However, as outlined above, there are other uses for exploit code.  Of
course, it doesn't really apply to passive attacks on clients such as
web browsers (but I presume that the discussion has slipped a bit 8-).
