Message-ID: <4F3C1479.2020708@yahoo.com.br>
Date: Wed, 15 Feb 2012 18:24:25 -0200
From: Lucas Fernando Amorim <lf.amorim@...oo.com.br>
To: Sanguinarious Rose <SanguineRose@...ultusTerra.com>
Cc: full-disclosure@...ts.grok.org.uk
Subject: Re: Arbitrary DDoS PoC
I will not answer this any more; sorry for feeding the trolls.
On 15-02-2012 17:34, Sanguinarious Rose wrote:
> On Wed, Feb 15, 2012 at 7:53 AM, Lucas Fernando Amorim
> <lf.amorim@...oo.com.br> wrote:
>
>> I do not know what you expect from public repos on GitHub; I really do not
>> understand. Did you think I would hand over the gold? Well, I think you are
>> too uninformed to have found that the maximum is around 200 threads with
>> pthreads. Have you tried ulimit -a? I even described this in the README.
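
The limits that ulimit -a reports, including the process/thread cap that
bounds how many pthreads a user can create, can also be read from Python's
standard-library resource module. A minimal Linux-oriented sketch (not code
from the repository):

```python
import resource

# RLIMIT_NPROC caps how many processes/threads this user may create;
# on Linux each pthread counts against it.
nproc_soft, nproc_hard = resource.getrlimit(resource.RLIMIT_NPROC)
print(f"max user processes/threads: soft={nproc_soft} hard={nproc_hard}")

# Per-thread stack size also bounds how many threads fit in memory.
stack_soft, stack_hard = resource.getrlimit(resource.RLIMIT_STACK)
print(f"stack size (bytes): soft={stack_soft} hard={stack_hard}")
```

These are the same soft/hard pairs that `ulimit -a` prints, so a script can
check them before deciding how many worker threads to spawn.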
> Missing the point that async would bring drastic improvements to
> anything network-based. Even if you increase it to, say, 500 threads, an
> async model still pwns anything using threads for simple
> connect/disconnect handling.
>
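
The single-threaded async model described above can be sketched in a few
lines of Python 3 asyncio (a minimal illustration, not code from either
repository); it runs many connect/disconnect cycles concurrently without
creating a thread per connection:

```python
import asyncio


async def touch(host: str, port: int) -> bool:
    """One connect/disconnect cycle; True if the connect succeeded."""
    try:
        _, writer = await asyncio.open_connection(host, port)
        writer.close()
        await writer.wait_closed()
        return True
    except OSError:
        return False


async def main() -> int:
    # A throwaway local server stands in for the real target, so the
    # sketch is self-contained and does not touch the network.
    server = await asyncio.start_server(lambda r, w: w.close(), "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    # 100 concurrent connect/disconnect cycles from a single thread --
    # no pthread (and no thread ulimit) involved.
    results = await asyncio.gather(*(touch("127.0.0.1", port) for _ in range(100)))
    server.close()
    await server.wait_closed()
    return sum(results)


if __name__ == "__main__":
    print(f"{asyncio.run(main())} successful connections")
```

Pointing `touch()` at a real host and port works the same way; the event
loop multiplexes all the sockets on one thread.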
Feel free to implement. ;)
>
>> As for the reCAPTCHA algorithm, did you really think it would have all the
>> code in the main file? Why would I do that? I distributed it across classes.
>>
> No, there wasn't. It was 12 lines of code which just called another
> OCR library. (could be why you deleted the public repo this morning)
>
> I did hear Google's cache does a good job of uncovering "OMG RAGE DELETE"
>
> http://webcache.googleusercontent.com/search?q=cache%3Ahttps%3A%2F%2Fgithub.com%2Flfamorim%2Frebreaker&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a
>
>
> I do have to declare myself the winner of this engagement by default
> now, because if you have to delete things in order to make claims about
> them...
>
>
Winner of what? That's a private repo now. Did you look at the utils
directory? There was an algorithm to find the ellipses of the captcha,
which was being developed to walk the edge and correct the distortion.
>> And why do you think IntensiveDoS accepts arguments and opens and closes a
>> socket? Because it is a snippet of code for more than just HTTP DoS.
>>
> I read the code; that could be why.
>
Let me ask another question: why do you think IntensiveDoS accepts arguments?
>> As for the trojan, do you really think I would build something better and
>> leave it public?
>>
>> What planet do you live on?
>>
> Totally because a bindshell trojan that connects to a port is
> something highly special that the world will end if someone got a hold
> of such a dangerous piece of code. In fact, why isn't the world ended
> yet when you can just google and get a few dozen of them?
>
> Should I tell you how "dangerous" it is, and what "planet" you live on, to
> release your oh-so-very-dangerous, innovative Python code? (hypocrisy
> for the win!)
>
>
There's nothing special about it, but it is the only code of its kind on
GitHub. Feel free to fork and share. And is that dangerous? I think not,
but it does run these days.
>> And curl is a great project for parallel HTTP connections; Python is not so
>> much, which is why that part stays with curl.
>>
>>
> Curl is indeed great, I agree. The rest I don't see as a point going
> anywhere.
>
If curl is a good project and is written in C, what reason would I have to
implement the same thing in Python?
>> On 14-02-2012 02:48, Lucas Fernando Amorim wrote:
>>
>> On Feb 13, 2012 4:37 AM, "Lucas Fernando Amorim" <lf.amorim@...oo.com.br>
>> wrote:
>>
>>> With the recent wave of DDoS, one concern that has not been addressed is
>>> the model where the zombies are not compromised by a trojan. In the
>>> standard model of a DDoS attack, the machines are purchased, usually as
>>> VPSs, or are obtained through trojans, thus forming a botnet. But the
>>> arbitrary form does not need to acquire a collection of computers:
>>> programs, servers and protocols are used to arbitrarily make requests to
>>> the target. P2P programs are especially vulnerable; DNS, internet
>>> proxies, and many sites that make requests on behalf of users, like
>>> Facebook or the W3C, also are.
>>>
>>> To demonstrate this, I made a 60-line proof-of-concept script that hits
>>> most HTTP servers on the Internet, even those with protections like
>>> mod_security or mod_evasive. It can be found at this link [1] on GitHub.
>>> Solving the problem depends only on reworking the protocols and limiting
>>> the number of concurrent and total requests that proxies and programs
>>> may make to a given site, returning a cached copy of the last response
>>> when the limit is exceeded.
>>>
>>> [1] https://github.com/lfamorim/barrelroll
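>>>

The mitigation proposed above, capping per-client request rates and serving
a cached copy once the cap is exceeded, might be sketched like this
(a hypothetical illustration; `RateLimitedCache` and `origin_fetch` are
invented names, not code from barrelroll):

```python
import time
from collections import defaultdict, deque


class RateLimitedCache:
    """Allow at most `limit` requests per `window` seconds per client;
    beyond that, serve the cached copy of the last fresh response."""

    def __init__(self, limit: int = 5, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # client -> recent request timestamps
        self.cache = {}                 # url -> last fresh response body

    def fetch(self, client: str, url: str, origin_fetch) -> str:
        now = time.monotonic()
        recent = self.hits[client]
        # Drop timestamps that have fallen out of the window.
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.limit and url in self.cache:
            # Over the limit: serve the cached copy, never hit the origin.
            return self.cache[url]
        # Under the limit (or nothing cached yet): hit the origin.
        recent.append(now)
        body = origin_fetch(url)
        self.cache[url] = body
        return body
```

For example, with `limit=3`, ten rapid requests from one client reach the
origin only three times; the other seven are answered from the cache, which
is exactly the back-pressure the paragraph above calls for.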
>>>
>>> Cheers,
>>> Lucas Fernando Amorim
>>> http://twitter.com/lfamorim
>>>
>>> _______________________________________________
>>> Full-Disclosure - We believe in it.
>>> Charter: http://lists.grok.org.uk/full-disclosure-charter.html
>>> Hosted and sponsored by Secunia - http://secunia.com/
>>>
>>
>>
>>