Message-ID: <006a01c35d35$39db7780$0c351c41@basement>
From: mattmurphy at kc.rr.com (Matthew Murphy)
Subject: Vulnerability Disclosure Debate

> The security alliance around Microsoft is trying to push its "reasonable
> vulnerability disclosure guidelines", which seek to prevent security
> researchers from publishing proof-of-concept code altogether, and want
> them to make only limited, next-to-useless information about security
> flaws available to the public.

Oh my god, this is news!  How many weeks has it been since the OIS released
its draft to this very list?

> In my humble, personal opinion, this step seeks to maximize income of
> several large security firms, as they would release any detailed
> information only to paying groups of subscribers... An inherently
> dangerous plan, and the argumentation behind it is severely flawed.

The flaw is not so much in the idea of limiting destructive code as in the
people who hold the details.  I personally think that a monopoly on security
information and tools will never arrive, particularly not when the would-be
monopoly is led by worthless, disreputable companies like ISS.

You only have to look at the countless errors in ISS's official advisories --
such as claiming that millions of insecure Apache boxes vulnerable to
existing underground exploits could not be exploited.  But it gets better:
when a distribution server was compromised, ISS claimed it was a honeypot.
Even funnier is that ISS added that none of its "production servers" were
compromised.  So, servers offering trial software aren't important?  Had the
distribution binaries been modified, ISS might well have been bankrupted by
customer lawsuits for negligence.  Mistakes do happen, but that is just
disgraceful; it's either a really bad cover-up or a show of ISS's true
colors when it comes to customer security.

> They state that those releasing proof-of-concept code to the public are
> responsible for the creation of various malware, viruses and worms
> exploiting the discovered vulnerabilities.
> Let me tell you one thing: If you believe that you are the only ones
> finding vulnerabilities, then you are to be considered a bunch of
> arrogant, self-deceived, stupid, ignorant bitches. Do you really think you
> are the only ones "31337" enough to find sec vulns??? Latest example:
> The people here at spacebitch.com noticed intrusions using the RPC/DCOM
> vulnerability at least a month before any information about it was
> published at all.

Sure ya did -- how many of us should believe that?  And I assume, of course,
that you notified Microsoft of the exploit immediately, right?

As for virus/worm authors and how they find bugs to exploit, if you had any
background here, you would have realized by now that the vast majority of
self-propagating code targets vulnerabilities for which working exploit code
is available.  Code Red, Nimda, Slammer, and Spida all fit this criterion.
While nobody can say for a fact that no virus writer has ever found his own
hole, we *can* say that trends and patterns in self-propagating code show
that the creation of such code is significantly accelerated when exploit
code is public.

Code doesn't help many people, because it has comparatively little security
value.  Posting code to the public gives every script kiddie in the world an
almost instant compromise of a large number of vulnerable boxes in a case
such as this.  I am all for exploit details like "after x bytes in a packet
with x header components, the service does [insert failure action here]...".
Offering pluggable target offsets just makes it easier for worm authors and
script kiddies.

> Now that it's published, everyone goes BIG NEWS about
> it, and predicts the advent of the next "internet destroying" worm which
> will take over all our systems. It doesn't matter to these people that
> the most effective worms and trojans are far lower profile than, for
> example, the "Slammer" worm was (an inherently dumb program, raising
> immediate attention just by the exorbitant amount of bandwidth it
> consumed). They don't even mention that there are so many worms and
> trojans making their way through cyberspace, mostly undetected and
> unnoticed, spreading slowly and in a limited manner only.

Well, I find it pretty incredible that this "inherently dumb program" spread
so well if it was so worthless and buggy.  Can't imagine what a
*well-written* worm for that bug would have done!

> Hackers, crackers and script kiddies alike are known to engage in
> exploit trading and often, they are discovering and exploiting
> vulnerabilities without going BIG NEWS about it... Do you really
> believe people are sending all their 0day to @stake & co in advance,
> just to let them make money off the news?? Would you not rather believe
> that crackers finding new vulnerabilities would keep them 0day as long
> as possible, exploiting them undiscovered, because the majority doesn't
> even know the hole exists?? To me, it would seem perfectly logical for
> hackers and crackers alike to ONLY publish their findings after the
> problem was initially noticed by the public. Would it not make sense to
> you? To keep 0day for fun and profit as long as possible, and then
> release a modified variant of the 0day as "proof-of-concept" code as
> soon as the public is notified, and credits and publicity are to be
> gained by releasing the exploit code to the public?

Now you're embarrassing yourself.  Crackers, etc., don't want credit
from the vast majority of the list readership (generally speaking, anyway),
and couldn't care less about what we say.  Also, some realize that breaking
into a system is illegal under the laws of most countries, and don't want to
draw attention to themselves by publishing the code they used to do it.

> To me, full disclosure makes perfect sense. Tell people about the
> vulnerability as soon as you notice it exists, and you'll see
> "proof-of-concept" code appearing within days - essentially proof that
> other people already knew about the vulnerability.

Not even close.  While we do see PoC code appear within a few days, that is
not an indicator that others had the details in advance, particularly if the
product is widely deployed: you can start exploit development within minutes
of receiving the first details, if you are in a position to do so (i.e., you
have a box in front of you to test).

> Also, full disclosure, including exploit code, frees you from the
> obligation to believe in software vendor advisories and patches -
> another critical issue, demonstrated again by the RPC/DCOM flaw:

Exploit code *does not* solve the problem.  I can do just as well by
providing no code and simply being descriptive with my details as I can by
providing code.  I've provided code with some advisories; it is not a
practice I engage in any longer.  It speaks poorly of a discoverer's writing
abilities if they cannot offer sufficient detail to at least reproduce the
flaw without providing exploit code.  Exploit code, while it can
conclusively prove that the vulnerability exists in a particular
configuration, is not 100% reliable (offsets can be bad, for instance), and
this can even create a false sense of security.  Further, you don't get any
solution by running an exploit.

> Apparently, M$'s fix doesn't really fix the problem to its full extent,
> and in some cases is believed to leave machines vulnerable to the
> attack. Again, something that had to be discovered by END USERS loading
> proof-of-concept exploits and trying them on their own systems. To me,
> it makes no sense to blindly trust a software vendor's patch, when it
> has repeatedly been shown that software vendors' patches often do not
> fully provide the anticipated security fixes.

And exploit code, of course, fills that gap, right?  You are talking about
two different things here.  MS03-026 certainly does mitigate the
vulnerability at hand.  Also, you must remember that vendor patches are only
designed to protect against vulnerabilities that immediately impact the
system being patched.  In a perfect world, ports 135, 139, and 445 would be
completely blocked by every machine connected to the internet and used for
LAN services only (the intended purpose of these services).  That would
effectively make patch installation a moot point, as well as prevent
exploitation of any future RPC-related vulnerability.
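
To illustrate (this is just a quick sketch of my own, not anything from an
advisory -- the placeholder address and the three-second timeout are
arbitrary), a few lines of Python will tell you whether a box is answering
on those ports from wherever you run the check:

import socket
import sys

# Ports that should only ever be reachable from the LAN:
# 135 (MS RPC endpoint mapper), 139 (NetBIOS session), 445 (SMB over TCP).
LAN_ONLY_PORTS = (135, 139, 445)

def exposed_ports(host, timeout=3.0):
    """Return the subset of LAN_ONLY_PORTS that accept a TCP connection."""
    open_ports = []
    for port in LAN_ONLY_PORTS:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            # connect_ex() returns 0 on success instead of raising
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
        finally:
            sock.close()
    return open_ports

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "192.0.2.1"  # placeholder
    found = exposed_ports(target)
    if found:
        print("%s answers on %s -- filter these at the perimeter" % (target, found))
    else:
        print("%s does not expose 135/139/445 from this vantage point" % target)

If that reports anything open when run from outside your LAN, the machine is
exposed to any future RPC/SMB flaw no matter how current its patches are.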

> Obviously, the time has NOT yet come to say goodbye to full disclosure,
> and doing so would leave end users at the mercy of some software
> producers' industry consortium to take care of OUR security - something
> they have repeatedly shown themselves incapable of.

This depends on how you define Full Disclosure.  I strongly believe that
details of vulnerabilities I find should be made available to the public.
That is how I define Full Disclosure.  Most security researchers today have
adopted the more rational viewpoint that Full Disclosure does not require
exploit code, as it has been proven many times (and will continue to be
proven) that exploit code does far more damage than good.  I also feel that
those who demand that vulnerabilities be disclosed immediately (or after
some other short period) are harming the concept.  The point of Full
Disclosure is to give the public the best opportunity for remedial action;
this usually includes vendor fixes.

In today's environment, where every new vulnerability is a time bomb, we
must balance the public's need to know with its need for workable solutions.

