Message-ID: <20030728005612.49052.qmail@web11402.mail.yahoo.com>
From: xillwillx at yahoo.com (w g)
Subject: DCOM RPC exploit (dcom.c)
I've noticed that ever since I posted to Bugtraq and this list, my site http://illmob.org has been under DDoS attack... lame
H D Moore <fdlist@...italoffense.net> wrote:
On Saturday 26 July 2003 07:16 pm, Chris Paget wrote:
> Personally, I'm tempted to set up my firewall to NAT incoming requests
> on port 135 to either www.metasploit.com or www.xfocus.org. I know
> this is the full-disclosure list, but working exploit code for an issue
> this huge is taking it a bit far, especially less than 2 weeks after
> the advisory comes out.
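(A redirect of the sort Chris describes would amount to roughly the
following iptables rules on a Linux gateway; this is an illustrative
sketch only, and the destination address is a placeholder rather than
either site's real IP:)

  # Sketch: DNAT all inbound TCP 135 to an external host (placeholder IP)
  iptables -t nat -A PREROUTING -p tcp --dport 135 \
           -j DNAT --to-destination 192.0.2.10:135
  # Allow the rewritten traffic to be forwarded out
  iptables -A FORWARD -p tcp -d 192.0.2.10 --dport 135 -j ACCEPT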
So you are going to intentionally DoS my web server because you don't
agree with full disclosure? Go right ahead, it will do wonders for your
own credibility and that of your company. If you are going to make
threats of that nature, at least have the decency to post from a non-work
account. You have just destroyed any moral ground you had to stand on
with regard to exploit disclosure policy. Stick your head back in the sand.
There are a number of reasons why I write and release fully-functional
exploits. The first one is easy; even after xfocus.org released their
exploit code, there still wasn't enough widespread awareness of the issue
to convince many people to patch. Releasing *functional* exploit code to
the public is one of the few methods that actually make a difference when
it comes to improving the overall security of the Internet.
Just because my exploit wasn't available to the public at large until
yesterday doesn't mean there weren't people exploiting systems for the
last two weeks. There are at least four different exploits that I am
aware of that were not based on the xfocus code. These exploits were
already in the hands of the "script kiddies", and they have had free rein
over most of the internet for at least a week or so. I feel that providing
people with an accurate way to determine their exposure to a given
vulnerability is a good thing. There will always be people who will use
these types of tools for malicious purposes, but the overall goal is to
motivate people to patch their systems.
The second reason is related to what I do for a living. I work for a
company that provides penetration testing and vulnerability assessment
services. Writing exploits is time-consuming and can be fairly difficult.
Since the security industry has started to shy away from releasing even
proof-of-concept code, there has been a huge gap in available exploit
code that I need to fill in order to do my job.
While I could keep them all private, I prefer to make them available to
administrators and other security professionals so that they can gauge
the risk to the systems they work with. Unfortunately, it's not possible
to verify that everyone who asks for a copy of the exploit has the best
intentions in mind. Even if I created a registration system, the code
would still be leaked and it would have zero effect on the availability
of the exploit.
IDS vendors have the same problem: all of the good exploits are private
these days, and if they can't capture an attack in the wild, they can't
detect it in their products. The lack of public exploit code has
absolutely no relation to what most of the malicious types are able to
compromise. If they can't detect an attack, they will more than likely
lose a customer.
The recent Samba vulnerability is a perfect example of why full
disclosure is a good thing. I "discovered" the vulnerability while
watching someone crack into an external server using a private exploit.
The exploit was effective in providing remote root access in under thirty
seconds. This vulnerability had been in the Samba code for over eight
years; there is still speculation as to how long people had been
exploiting it in the wild. When the advisory and exploit code were made
public, there was a massive backlash about how we were "endangering
millions of users". They failed to take into account that the people they
need to worry about already had the means to exploit the vulnerability.
This "bury your head in the sand" attitude is the reason why I continue to
release exploits the way I do and why I have no plans to remove the DCOM
code from my web site. The problem with buggy exploits is that when
someone uses them to test their system and they don't work, the common
reaction is to put off patching until a more convenient time. The public
perception seems to be that risks are directly associated with available
exploits. This is far from the truth. The reasoning behind it can be
directly attributed to the media in general and a few large security
companies in particular.
There is a common thread among the media when reporting security issues:
"Whose fault is it?". If the exploit for a vulnerability is not public
yet, they tend to blame the vendor. If an exploit has been released, they
blame the author. If a worm is on the loose that utilizes the
vulnerability as an attack vector, they blame the end users and
administrators.
The result of all this is that people stopped releasing exploits to avoid
bad press. I can name at least three well-known researchers who have
stopped releasing code simply because of the media backlash they received
when doing so. The really twisted part about this is the quotes that
reporters use when writing their articles. Often they will call people
who work at security companies that practice non-disclosure and ask them
what they think of an exploit release. The ironic part is that many of
the people who provide these quotes depend on publicly available code to
provide their own penetration testing services.
The really silly part is that now there are companies who make their
money by selling commercial exploits. What makes them "responsible" in
the public eye is that they only sell their tools to people with "good
intentions". This usually means they verify that you have a job where you
have a legitimate need for their product. This verification procedure can
only be so strict; many large institutions may want to use such a product
to gauge their risk to a given vulnerability, and not all of those
clients restrict their use of the product to work-related activities. As
of a couple of months ago, pirated copies of these products were already
circulating in the underground.
There are actually a number of good reasons for not releasing exploit
code. Public exploits tend to provide capabilities to the average user
that they really shouldn't have. This results in many less-skilled users
compromising hosts with public exploits. I certainly agree that this is
not a desirable situation.
The main problem is that there is no way to securely distribute exploits
to the people who need them legitimately while still keeping them out of
the hands of those who have malicious intent. Many times these are the
same people; there have been quite a few high-profile criminal cases
where the perpetrator had a day job as a system administrator. A
registration system may slow down the public distribution of an exploit,
but the delay before full disclosure shrinks roughly in inverse
proportion to the number of registered users.
In the future, I might look into alternative methods of distribution as
well as some benign exploit payloads, but I do not believe that releasing
crippled code or binary-only proofs of concept will make a significant
impact on the availability of functional exploits. I feel that the benefit
of having publicly available exploit code far outweighs the side effect
of arming the less-skilled and less-connected malicious parties.
See you in Vegas?
-HD