Date: Wed, 15 Feb 2012 12:34:24 -0700
From: Sanguinarious Rose <SanguineRose@...ultusTerra.com>
To: Lucas Fernando Amorim <lf.amorim@...oo.com.br>,
	full-disclosure@...ts.grok.org.uk
Subject: Re: Arbitrary DDoS PoC

On Wed, Feb 15, 2012 at 7:53 AM, Lucas Fernando Amorim
<lf.amorim@...oo.com.br> wrote:
> I only know how to subscribe to the digest list, so I have to keep
> answering in this bizarre way; I apologize. If someone has an
> alternative way, please tell me.

Change your settings where you subscribed.

>
> I do not know what you expect from public repos on GitHub; I really do
> not understand. Did you think I would hand over the gold as well?
> Well, I think you are too uninformed to know that the maximum is 200
> threads with pthread. Have you tried ulimit -a? I even described it in
> the README.
>

You're missing the point: async brings drastic improvements to anything
network-based. Even if you raise the limit to, say, 500 threads, an
async model still pwns anything that uses threads for simple
connect/disconnect handling.
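To illustrate (a sketch of mine, not code from either repo; the host,
port, and count are placeholders), here is single-threaded
connect/disconnect handling with Python's asyncio, standing in here for
any event-loop library:

    import asyncio

    async def touch(host: str, port: int) -> None:
        # Open a TCP connection and close it again immediately.
        reader, writer = await asyncio.open_connection(host, port)
        writer.close()
        await writer.wait_closed()

    async def main(n: int = 500) -> None:
        # One coroutine per connection; the event loop multiplexes them
        # all on a single thread, so no per-thread stack or ulimit cap
        # applies the way it does with pthreads.
        results = await asyncio.gather(
            *(touch("example.org", 80) for _ in range(n)),  # placeholder target
            return_exceptions=True,  # one failed connect shouldn't kill the batch
        )
        ok = sum(1 for r in results if not isinstance(r, Exception))
        print(f"{ok}/{n} connects succeeded")

    asyncio.run(main())

The point is the scheduling model: each coroutine costs a small object,
not a stack, so the 200-thread default never enters the picture.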

> As for the reCAPTCHA algorithm, did you really think it would all be
> in the main file? Why would I do that? I distributed it across
> classes.

No, you didn't. It was 12 lines of code that just called another OCR
library. (That could be why you deleted the public repo this morning.)

I did hear Google's cache does a good job of uncovering an "OMG RAGE
DELETE":

http://webcache.googleusercontent.com/search?q=cache%3Ahttps%3A%2F%2Fgithub.com%2Flfamorim%2Frebreaker&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a

I do have to declare myself the winner of this engagement by default
now, because if you have to delete things in order to make claims about
them...

>
> And why do you think IntensiveDoS accepts arguments and opens and
> closes a socket? Because it is a snippet of code for more than just
> HTTP DoS.

I read the code; that could be why.

>
> As for the trojan, do you really think I would make something better
> and leave it public?
>
> What planet do you live on?
>

Right, because a bindshell trojan that connects to a port is something
so special that the world would end if someone got hold of such a
dangerous piece of code. In fact, why hasn't the world ended yet, when
you can just Google and get a few dozen of them?

Should I ask how "dangerous" you are, and what "planet" you live on, to
release your oh-so-very-dangerous, innovative Python code? (Hypocrisy
for the win!)

> And cURL is a great project for parallel HTTP connections; Python is
> not so much, and that is why only the fork stays with it.
>

cURL is indeed great, I agree. As for the rest, I don't see the point
going anywhere.
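For what it's worth, what makes curl good at this is its multi
interface, which drives many transfers from a single thread. A minimal
pycurl sketch (my illustration, not code from the repo; the URL list is
a placeholder):

    import pycurl
    from io import BytesIO

    urls = ["http://example.org/"] * 10  # placeholder targets

    multi = pycurl.CurlMulti()
    handles = []
    for url in urls:
        buf = BytesIO()
        c = pycurl.Curl()
        c.setopt(pycurl.URL, url)
        c.setopt(pycurl.WRITEFUNCTION, buf.write)  # collect the body in memory
        multi.add_handle(c)
        handles.append((c, buf))

    # Drive all transfers on this one thread until none remain active.
    num_active = len(handles)
    while num_active:
        ret, num_active = multi.perform()
        if ret == pycurl.E_CALL_MULTI_PERFORM:
            continue  # more work is ready right now
        if num_active:
            multi.select(1.0)  # wait for sockets to become ready

    for c, buf in handles:
        print(c.getinfo(pycurl.RESPONSE_CODE), len(buf.getvalue()))
        multi.remove_handle(c)
        c.close()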

>
> On 14-02-2012 02:48, Lucas Fernando Amorim wrote:
>
> On Feb 13, 2012 4:37 AM, "Lucas Fernando Amorim" <lf.amorim@...oo.com.br>
> wrote:
>>
>> With the recent wave of DDoS attacks, one concern that has not been
>> addressed is the model in which the zombies were never compromised by
>> a trojan. In the standard model of a DDoS attack, the machines are
>> purchased, usually as VPSes, or are obtained through trojans, thus
>> forming a botnet. But the arbitrary variant does not need to acquire
>> a collection of computers: existing programs, servers, and protocols
>> are made to fire arbitrary requests at the target. P2P programs are
>> especially vulnerable; so are DNS, Internet proxies, and the many
>> sites that make requests on a user's behalf, like Facebook or the
>> W3C.
>>
>> To demonstrate this, I made a 60-line proof-of-concept script that
>> hits most HTTP servers on the Internet, even those with protections
>> such as mod_security and mod_evasive. It can be found at this link
>> [1] on GitHub. Solving the problem depends only on reformulating the
>> protocols and limiting the number of concurrent and total requests
>> that proxies and programs may make against a given site, returning a
>> cached copy of the last response once the limit is exceeded (a sketch
>> of that limit-and-cache idea follows after the quoted message below).
>>
>> [1] https://github.com/lfamorim/barrelroll
>>
>> Cheers,
>> Lucas Fernando Amorim
>> http://twitter.com/lfamorim
>>
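The limit-and-cache mitigation quoted above might look roughly like
this (a toy sketch of mine, not code from the PoC; the names and the
cap are invented):

    import threading

    MAX_CONCURRENT = 10   # invented per-client cap
    active = {}           # client -> requests currently in flight
    last_response = {}    # client -> cached copy of the last response
    lock = threading.Lock()

    def handle_request(client, do_work):
        # Past the cap, serve the cached copy instead of doing work.
        with lock:
            if active.get(client, 0) >= MAX_CONCURRENT:
                return last_response.get(client, "503: try again later")
            active[client] = active.get(client, 0) + 1
        try:
            response = do_work()
        finally:
            with lock:
                active[client] -= 1
        with lock:
            last_response[client] = response
        return response

A real implementation would key on source address or proxy identity and
expire the cache, but the shape is the same.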

_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/
