Message-Id: <8736161D-E895-4327-A338-9BF812279A90@webweaving.org>
Date: Thu, 25 Aug 2011 09:46:01 +0100
From: Dirk-Willem van Gulik <dirkx@...weaving.org>
To: full-disclosure@...ts.grok.org.uk
Subject: Re: Apache Killer


On 25 Aug 2011, at 05:54, Michal Zalewski wrote:

>> Just for the record, I have the impression that this is not the same vulnerability
>> you outlined in your advisory a while back. It is more that the idea
>> for this vulnerability originated from your advisory; it is not the same bug.
> 
> I don't think this even matters, and I really don't disagree...
> 
> In 2007, I noticed that their Range handling is silly, and may prompt
> them to generate very large responses.

Hmm - I think we are conflating two issues:

1)	The contemporary interpretation of RFC 2616 allows for multiple
	overlapping ranges, requires them to be returned in order, and
	has easy ways (e.g. "0-") of asking for a lot of data.

This basically means that a small request can cause the server to prepare a lot
of data and, as long as the TCP connection looks alive to the server, a lot of
data to be sent out. And because Ranges are fundamentally about arbitrary byte
ranges, this means hard work for the server.
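To make that concrete, here is a minimal Python sketch (not the attack tool that
was posted; the file size and range count below are made-up illustration values)
of how a request header of a few hundred bytes can, under a literal reading of
RFC 2616, ask the server to prepare orders of magnitude more data:

# Illustrative only: compares the size of a Range header built from
# overlapping "0-" ranges against the data it nominally asks for.
# TARGET_SIZE and the range count are assumed values, not measurements.
TARGET_SIZE = 100 * 1024                      # pretend the resource is 100 KB
ranges = ",".join("0-" for _ in range(100))   # 100 overlapping whole-file ranges
header = "Range: bytes=" + ranges

print("request header bytes  :", len(header))
print("bytes nominally asked :", 100 * TARGET_SIZE)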

This is an issue for all web servers that implement Range properly according to
RFC 2616. This strict interpretation is currently being discussed in the IETF:

	http://trac.tools.ietf.org/wg/httpbis/trac/ticket/311

This issue is, as Michal's 2007 email importantly points out, cross-server - not
just Apache. And I am sure we've not heard the last of this.

Then there is something very Apache 1.3/2.0/2.2 specific:

2)	Under certain (very common) conditions Apache internally is really
	rather inefficient at handling such requests, which 'explode' into
	many hundreds of internal requests for large byte ranges.

	This is purely due to how it sets up the various bucket brigades,
	and is a lingering remnant of the good old HTTP/0.9 times, when one
	request meant one file and one reply.
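As a rough illustration of why each range carries its own overhead, the Python
sketch below assembles a multipart/byteranges style reply by hand. The boundary
string, part headers and in-memory body are assumptions for illustration, and
real Apache streams buckets rather than building strings, but the per-range
multiplication is the point:

# Illustrative only: one part (boundary + headers + byte slice) per
# requested range, so 200 overlapping ranges from a single small request
# become 200 separately prepared segments.
body = b"x" * 1024                     # stand-in for the requested file
boundary = "THIS_STRING_SEPARATES"     # assumed boundary value
ranges = [(0, len(body) - 1)] * 200    # 200 overlapping whole-file ranges

parts = []
for start, end in ranges:
    parts.append("--%s\r\nContent-Type: text/plain\r\n"
                 "Content-Range: bytes %d-%d/%d\r\n\r\n"
                 % (boundary, start, end, len(body)))
    parts.append(body[start:end + 1].decode())   # one slice per range
parts.append("--%s--\r\n" % boundary)

reply = "".join(parts)
print("ranges:", len(ranges), " reply bytes:", len(reply))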

The latter needs to be fixed in the Apache code - and it would not surprise
me if we see more issues like this, where design assumptions from the old
1:1:1 times are no longer valid.

Thanks,

Dw.



