Message-ID: <232810-2200386924156840@M2W063.mail2web.com>
From: mattmurphy at kc.rr.com (mattmurphy@...rr.com)
Subject: Vulnerability Disclosure Debate
"gridrun" <gridrun@...es.smart-girlies.org> writes:
Matthew Murphy wrote:
<snip>
>Well, I find it pretty incredible that this "inherently dumb program"
>spread so well, then, if it was so worthless and buggy. Can't imagine
>what a *well-written* worm for that bug would have done, then!
>
>You can't imagine? You don't know much about the underground, it seems.
>Btw, it really spread well, yeah... If you consider spreading to the news
>headlines a good thing, yeah. It did very well.
>No wonder you can't imagine. Oh well, after all you think those high
>profile worms make up for the "vast majority" of self propagating code...
>You are living in a dream world, Neo.
>
>most widely known != vast majority
When dealing with viruses, two qualities are commonly seen: speed and
survivable spreading. The Slammer worm exemplified the former: it spread so
quickly and consumed so much bandwidth that it was bound to reach most of the
internet, but from its author's viewpoint it was an over-aggressive spreader.
So, while Slammer did spread very well *initially* (it came close to scanning
the entire internet, a rate few other worms have matched), it did not aim for
survival.
Slammer's failure was that it became a pest. Short term, it most certainly
did spread very well; long term, it did not. I realize that what gathers
media attention are short-term, fast-spreading worms: Code Red, Slammer, and
even the "media worthy" mass-mailing worms all shared this characteristic.
My point was that a well-written version of Slammer would probably have
differed only in controlling its spread, and would therefore have taken
considerably longer to detect. This of course assumes that a "better
Slammer" would carry no payload; a payload increases the odds of detection,
making a worm less successful.
The vast majority of malicious code doesn't make the headlines, and I
realize that. From an author's viewpoint, making the headlines is a
failure, as it means the worm has been discovered on many systems.
However, Slammer's flash-in-the-pan spread was the only serious damage it
caused to internet services, through incredibly excessive bandwidth usage.
So it was inherently more damaging when poorly written than it probably
would have been if written well.
[snip]
>>Now, you're embarrassing yourself. Crackers, and etc. don't want credit
>>from the vast majority of the list readership (generally speaking,
>>anyway), and couldn't care less about what we say. Also, some realize that
>>the act of breaking into a system under the laws of most countries is
>>illegal, and don't want to draw attention to themselves by publishing the
>>code they used to do it.
>
>You cannot imagine how a well-written worm would behave, but you claim
>to know what crackers want. You, sir, are contradicting yourself. Besides,
>it was exactly my point that most exploits remain 0day anyways.
Saying that *some* exploits remain in the underground is more plausible,
and from what I've seen, more accurate. There's no contradiction at all:
I said that in a joking tone, since the perfectly undetectable worm is the
worm that does *no damage*.
However, worms with no possibility of damage are mere theory, as spreading
*is* damage in many cases. Even where the spread itself is not directly
damaging, it is a noticeable deviation from typical behavior, no matter how
well controlled it is. Therefore any worm that truly does spread itself is
detectable, and therefore "inherently dumb" by your criterion.
>>To me, full disclosure makes perfect sense. Tell people about the
>>vulnerability as soon as you notice it exists, you'll see
>>"proof-of-concept" code appearing within days - essentially a proof that
>>there were other people knowing about the vulnerability already.
>
>Not even close. While we see PoC code appear in only a few days, that is
>not an indicator of advanced details, particularly if the product is widely
>deployed, as you can start exploit development in a matter of minutes after
>receiving the first details, if in a position to do so (i.e, you have a box
>in front of you to test).
>
>You still believe vulnerabilities are not found until someone at (insert
>name of big money sec company here) notices them, then you are way off.
*knock, knock*
Not the case at all -- I was just disputing a logical flaw. The appearance
of PoC code mere days after the announcement does not *necessarily* mean
that the author of such code had prior notice, although this is *sometimes*
the case.
>>Also, full disclosure, including exploit code, frees you from the
>>obligation to believe in software vendor advisories and patches -
>>another critical issue, demonstrated again by the RPC/DCOM flaw:
>
>Exploit code *does not* solve the problem. I can do just as well by
>providing no code, and just being descriptive with my details, as I can by
>providing code. I've provided code with some advisories; this is not a
>practice I engage in any longer. It really speaks poorly for the writing
>capabilities of the discoverer if they are incapable of offering sufficient
>detail to at least reproduce the flaw without providing exploit code.
>Exploit code, while it can conclusively prove that the vulnerability exists
>in a particular config, is not 100% accurate (offsets can be bad, for
>instance), and this can even create a false sense of security. Further,
>you don't get any solution by running an exploit.
>
>Descriptive like "There exists a problem in the way XYZ handles FUBAR
>requests. The vulnerability
>can be exploited remotely. Patches are available; apply immediately." ?
>mmkay...
>I share your point of view about the false sense of security tho.
>Perfectly valid point.
No, descriptive like this, for, say a web server: "X product has a buffer
overflow in its handling of GET requests; a URI of 2048 characters causes
the server to crash due to a fatal exception."
This allows the author to telnet to his httpd, send it a GET with a
2048-character URI, and see what happens; if he wants an exploit, however,
he has to write it himself. You could even provide a script that automates
the test against the loopback interface, up to the point of causing the
crash. Producing a hostile exploit from such a description requires the
reader to be reasonably skilled with things like stack handling, return
addresses, and so on. However, this is only effective if no PoC code
appears that enables malicious actions, a goal many people say they support
and then disregard.
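As a sketch of the harmless loopback test described above (the 2048-byte URI
length, the port, and the server itself are illustrative assumptions from the
hypothetical advisory, not details of any real product):

```python
import socket

# Assumed from the hypothetical advisory above: a 2048-character URI
# is said to crash the server.  These values are illustrative only.
OVERFLOW_LEN = 2048

def build_long_get(length=OVERFLOW_LEN):
    """Build an HTTP GET request whose URI is `length` filler characters."""
    uri = "/" + "A" * length
    return ("GET " + uri + " HTTP/1.0\r\n\r\n").encode("ascii")

def probe_loopback(port=80, timeout=5.0):
    """Send the long request to a server on the loopback interface and
    report whether it still answers.  A dropped connection or silence
    suggests the crash the advisory describes; it proves nothing more."""
    try:
        with socket.create_connection(("127.0.0.1", port),
                                      timeout=timeout) as s:
            s.sendall(build_long_get())
            s.settimeout(timeout)
            return bool(s.recv(1))   # any response byte: server survived
    except OSError:
        return False                 # refused/reset: possible crash
```

Running `probe_loopback()` before and after patching would show whether the
crash condition is still present, without handing anyone a weaponized
exploit.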
>>>To me, it makes no sense to blindly trust in a software vendor's patch,
>>>when it has repeatedly been shown that software vendors' patches often do
>>>not fully provide the anticipated security fixes.
>>
>>And exploit code, of course, fills that gap, right? You are talking about
>>two different things here. MS03-026 certainly does mitigate the
>>vulnerability at hand. Also, you must remember that vendor patches are
>>only designed to protect against vulnerabilities that immediately impact
>>the system being patched.
>
>Which part did you not understand? Failure of the RPC/DCOM patch to
>effectively address the vulnerability was discovered only when end users
>ran * E X P L O I T C O D E * against their own, patched servers. It
>might not give you a solution to the problem, but at least *you know if
>the problem still exists*
This is not only extreme, but to my knowledge factually incorrect: the
efficacy of the MS03-026 patch, when it was properly applied, has not been
disproven.
>>In today's environment where every new vulnerability is a time bomb, we
>>must balance the public's need to know with its requirement for suitable
>>solutions.
>>
>And who should balance? You?? After all, you are the "public". Unless
>you're on someone's payroll to post anti-FD FUD here, that is.
Nope, I'm not on any payroll, nor am I posting "FUD". In fact, I'm speaking
as a member of the public. I own two systems, both running Windows XP
Professional (SP1). When the vulnerability was announced, one was
immediately patched, but I did not have immediate access to the other (I
was on vacation). By the time the exploits had appeared, I still had not
had the chance to rectify that situation. The system is now patched, but
this is a problem we must acknowledge.
I fully agree with your right to test the validity of patches. However, in
the case of a buffer overflow vulnerability (especially one of this
severity), it is unnecessary to post a full reverse-shell-binding exploit
for every script kiddie in the world in order to prove that a patch is
effective. Had it stopped with the DoS code (which I believe BenJurry
posted), that alone was perfectly adequate for testing the patch. The
further exploits, universal offsets, and the like are excessive and make
things much easier for potential kiddies.