Date: Thu, 21 Jul 2005 08:42:55 +1000 (Australia/ACT)
From: Darren Reed <avalon@...igula.anu.edu.au>
To: fernando@....utn.edu.ar (Fernando Gont)
Cc: full-disclosure@...ts.grok.org.uk, bugtraq@...urityfocus.com
Subject: Re: ICMP-based blind performance-degrading attack


Ok, so you really think this is new...

Go look in the bugtraq archives for 8 July 2001 and you might find an
email like the one below.  There was a thread on this topic then.

It would be nice if you included a reference in your IETF draft to my
original work on this, from 4 years ago.  Unless you want to try and
proclaim that discussion on bugtraq isn't public knowledge and that
your discoveries are somehow "new".

Here is just one pointer to the start of the thread:
http://cert.uni-stuttgart.de/archive/bugtraq/2001/07/msg00124.html

And the original email is included below, just in case.

Good luck with your further research on IP "security issues".

Cheers,
Darren

Subject: Small TCP packets == very large overhead == DoS?
To: bugtraq@...urityfocus.com
Date: Sun, 8 Jul 2001 01:47:21 +1000 (Australia/ACT)
X-Mailer: ELM [version 2.5 PL1]


On a LAN far far away, a rogue packet was heading towards a
server, ready to start up a new storm ...

[I was going to start this by saying "years ago" but well...
that might be showing my age ;)]

Anyway, down to business.

If any of you have tested how well a box performs when it has a
small MTU, you will know that setting the MTU to (say) 56 on a
diskless thing is a VERY VERY bad idea when NFS read/write packets
are generally 8k in size.  Do not try it on an NFS thing unless you
plan to reboot it, ok?  Last time I did this was when I worked out
you could fragment packets inside the TCP header, and that lesson
was enough for me ;-)

Following on from this, it occurs to me that the problem with the
above can possibly be reproduced with TCP.  How?  That thing called
"maximum segment size".  The problem?  Well, the first is that there
does not appear to be a minimum.  The second is that it is set by
the caller, not the callee.  Did I hear someone say "oh dear"?
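
For reference, here is roughly what that value looks like on the wire:
the MSS travels as a four-byte TCP option in the SYN (kind 2, length 4,
then a 16-bit value, per RFC 793), and nothing in the format enforces a
floor.  The helper below is just an illustration, not from any stack's
source:

/* Sketch: build the TCP MSS option as carried in a SYN segment.
 * Kind 2, length 4, 16-bit value - no lower bound anywhere. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>          /* htons */

static void build_mss_option(uint8_t opt[4], uint16_t mss)
{
    uint16_t net = htons(mss);
    opt[0] = 2;                 /* kind: Maximum Segment Size */
    opt[1] = 4;                 /* option length, incl. kind and len */
    memcpy(&opt[2], &net, 2);   /* the advertised MSS */
}

int main(void)
{
    uint8_t opt[4];
    build_mss_option(opt, 1);   /* a perfectly "legal" MSS of 1 */
    printf("%02x %02x %02x %02x\n", opt[0], opt[1], opt[2], opt[3]);
    return 0;
}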

What's this mean?  Well, if I connect to www.microsoft.com and set
my MSS to 143 (say), they need to send me 11 packets for every one
they would normally send me (with an MSS of 1436).  Total output
for them is 1876 bytes instead of 1476 - a 27% increase.  However,
that's not the real problem.  My experience is that hosts, especially
PCs, have a lot of trouble handling *LOTS* of interrupts.  To send 2k
out via the network, it's no longer 2 packets but 15+ - a significant
increase in the workload.

A quick table (based on 20-byte IP & 20-byte TCP headers):

datalen    mss     packets    total bytes    %increase
1436       1436       1          1476             0%
1436       1024       2          1516             3%
1436        768       2          1516             3%
1436        512       3          1556             5%
1436        256       6          1676            13%
1436        128      12          1916            30%
1436         64      23          2356            60%
1436         32      45          3236           119%
1436         28      52          3516           138%  (MTU = 68)
1436         16      90          5036           241%
1436          8     180          8636           485%
1436          1    1436         58876          3889%
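
The table is easy to regenerate; the short program below does the same
arithmetic, assuming the same 40 bytes of IP+TCP headers per packet and
no TCP options:

/* Reproduce the overhead table: for a 1436-byte write, how many
 * packets and how many total bytes does a given MSS cost? */
#include <stdio.h>

int main(void)
{
    const int datalen = 1436, hdr = 40;
    const int base = datalen + hdr;   /* one full-MSS packet: 1476 bytes */
    const int mss_list[] = { 1436, 1024, 768, 512, 256, 128,
                             64, 32, 28, 16, 8, 1 };
    const int n = sizeof(mss_list) / sizeof(mss_list[0]);

    printf("%7s %8s %12s %10s\n", "mss", "packets", "total bytes",
           "%increase");
    for (int i = 0; i < n; i++) {
        int mss = mss_list[i];
        int packets = (datalen + mss - 1) / mss;  /* ceil(datalen/mss) */
        int total = datalen + packets * hdr;      /* data + headers */
        printf("%7d %8d %12d %9.0f%%\n", mss, packets, total,
               100.0 * (total - base) / base);
    }
    return 0;
}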

For Solaris, you can enforce a saner minimum MSS than the
install default of 1 with ndd:
ndd -set /dev/tcp tcp_mss_min 128

HP-UX 11.* is in the same basket as Solaris.

The *BSDs have varying minimums well above 1 - NetBSD at 32, FreeBSD at 64.
(OpenBSD's comment on this says 32 but the code says 64 - mmm, qwality!)

Linux 2.4 uses 88.

I can't see anything in the registry or MSDN which says what it
is for Windows.  By experimentation, Win2000 appears to be 88 and
NT 4 appears to be 1.

Do I need to mention any other OS? :*)

Nothing besides Solaris seems to offer anything close to a
reasonable way to tune the minimum value.

What's most surprising is that there does not appear to be a documented
minimum, just as there is no "minimum MTU" size for IP.  If there is,
please correct me.

About the only bonus here is that there does not appear to be an
easy way to affect the MSS sent in the initial SYN packet.

Oh, so how's this a potential denial of service attack?  Generally,
network efficiency comes through sending lots of large packets...but
don't tell the ATM folks that, of course :-)  Does it work?  *shrug*
It is not easy to test.  The only testing I could do (with NetBSD)
was to use the TCP_MAXSEG setsockopt, BUT this only affects the
sending MSS (now what use is that? :-).  Still, in testing, changing
it from the default 1460 to 1 caused the number of packets needed to
write 1436 bytes of data to discard to go from 9 to 2260.  Sending
100 * 1436 bytes from the NetBSD box to Solaris 8 took 60 seconds
(MSS of 1) vs ~1 second with an MSS of 1460.  Of even more
significance, one connection like this made almost no difference
after the first run, but running a second saw kernel CPU jump to 30%
on an SS20/712 (I suspect some serious TCP tuning is happening
dynamically).  The sending host was likewise afflicted with a
significant CPU usage penalty if more than one was running.  There
were some very surprising things happening too: with just one session
active, ~170-200pps were seen with netstat on Solaris, but with the
second, it was between 1750 and 1850pps.  Can you say "ACK storm"?
Oh, and for fun you can enable TCP timestamping just to make those
headers bigger and run the system a bit harder whilst processing
packets!
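
For anyone wanting to repeat the test, it amounts to something like
the sketch below: clamp TCP_MAXSEG before connecting, then push
100 * 1436 bytes at the discard service.  The address is a placeholder
(TEST-NET), and a given stack may clamp or reject an MSS this small:

/* Sketch of the NetBSD test: tiny TCP_MAXSEG, bulk write to discard. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>        /* TCP_MAXSEG */
#include <sys/socket.h>

int main(void)
{
    char buf[1436];
    int mss = 1;                /* the pathological "legal" minimum */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sin;

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss)) < 0)
        perror("TCP_MAXSEG");   /* only affects the sending MSS here */

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(9);    /* discard service */
    sin.sin_addr.s_addr = inet_addr("192.0.2.1");   /* placeholder host */

    if (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
        perror("connect");
        return 1;
    }
    memset(buf, 'x', sizeof(buf));
    for (int i = 0; i < 100; i++)       /* 100 * 1436 bytes, as above */
        if (write(fd, buf, sizeof(buf)) < 0)
            break;
    close(fd);
    return 0;
}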

Oh, I haven't investigated the impact of ICMP PMTU discovery, but
from my reading of at least the BSD source code, the MTU for the
route will be ignored if it is less than the default MSS when
sending out the TCP SYN with the MSS option.  That aside, PMTU
discovery will still impact current connections and would appear to
be a way to force the _current_ MSS below that set at connect time.
BSD will not accept PMTU updates if the MTU is less than 296; on
Solaris 8 and Linux 2.4 it just needs to be above 68 (hmmm, that
allows you to get an effective MSS of less than 88 :).  mmm, source
code.

So, what are the defences?  Quite clearly the host operating system
needs to set a much more sane minimum MSS than 1.  Given there is
no minimum MTU for IP - well, maybe "68" - it's hard to derive
what it should be.  Anything below 40 should just be banned (that's
the point at which you're transmitting 50% data, 50% headers).
Most of the defaults, above, are chosen because they fit in well
with their internal network buffering (some use a default MSS of
512 rather than 536 for similar reasons).  But above that, what
do you choose?  80 for a 33/67 split, or something higher still?
Whatever the choice and however it is calculated, it is not enough
to just enforce it when the MSS option is received.  It also needs
to be enforced when the MTU parameter is checked in ICMP "need frag"
packets, as sketched below.
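
In rough pseudo-kernel C, that amounts to one clamp applied in both
code paths.  The floor of 128 below is just the value from the Solaris
ndd example above, not a recommendation, and the function names are
made up for illustration:

#define TCP_MSS_MIN   128   /* assumed tunable floor, per the ndd example */
#define IPTCP_HDRLEN   40   /* 20-byte IP + 20-byte TCP header */

static unsigned int clamp_mss(unsigned int mss)
{
    return (mss < TCP_MSS_MIN) ? TCP_MSS_MIN : mss;
}

/* Path 1: MSS option received in a SYN. */
unsigned int mss_from_syn(unsigned int advertised)
{
    return clamp_mss(advertised);
}

/* Path 2: next-hop MTU received in an ICMP "need frag" message.
 * Without the clamp here, a forged ICMP can push the effective
 * MSS below whatever floor the SYN path enforced. */
unsigned int mss_from_icmp(unsigned int nexthop_mtu)
{
    if (nexthop_mtu <= IPTCP_HDRLEN)    /* nonsense MTU; keep the floor */
        return TCP_MSS_MIN;
    return clamp_mss(nexthop_mtu - IPTCP_HDRLEN);
}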

Darren

p.s. I guess if I were one of those corporate types who get paid to
do security stuff I'd write this up as a white paper, but like, this
is the 'net, man!

p.p.s.  So far as I know, nobody has covered this topic from this
angle before, or if they have, I'm ultralame for not being out on
a Saturday night when I could have been.


