Message-ID: <1366895438.26911.572.camel@localhost>
Date:	Thu, 25 Apr 2013 15:10:38 +0200
From:	Jesper Dangaard Brouer <brouer@...hat.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	"David S. Miller" <davem@...emloft.net>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	netdev@...r.kernel.org
Subject: Re: [net-next PATCH 1/4] Revert "inet: limit length of fragment
 queue hash table bucket lists"

On Wed, 2013-04-24 at 17:00 -0700, Eric Dumazet wrote:
> On Wed, 2013-04-24 at 17:48 +0200, Jesper Dangaard Brouer wrote:
> > This reverts commit 5a3da1fe9561828d0ca7eca664b16ec2b9bf0055.
> > 
> > The problem with commit 5a3da1fe (inet: limit length of fragment queue
> > hash table bucket lists) is that, once we hit the hash depth limit (of
> > 128), we *keep* the existing frag queues, not allowing new frag
> > queues to be created.  Thus, an attacker can effectively block handling
> > of fragments for 30 sec (as each frag queue has a timeout of 30 sec).
> > 
> 
> I do not think it's good to revert this patch. It was a step in the
> right direction.

But in its current state I consider this patch dangerous.

> Limiting chain length to 128 is good.

Yes, it's good to have a limit on the hash depth, but with the current
mem limit per netns this creates an unfortunate side-effect.  There is
a disconnect between the netns mem limits and the global hash table.

Even with a hash size of 1024, this just postpones the problem.
We now need 35 netns instances to reach the point where we block all
fragments for 30 sec.

Given a min frag size of 1108 bytes:
  1024 (hash size) * 128 (depth limit) * 1108 bytes = 145227776 bytes
  145227776 / (4*1024*1024 bytes per netns) = 34.625 netns instances
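
The same arithmetic as a trivial user-space C program (just a sketch of
the calculation above; the 4 MB figure is the per-netns frag memory
limit used in the division, not something I looked up here):

#include <stdio.h>

int main(void)
{
	const long buckets   = 1024;             /* global hash table size   */
	const long depth     = 128;              /* per-bucket chain limit   */
	const long min_frag  = 1108;             /* min bytes per frag queue */
	const long per_netns = 4L * 1024 * 1024; /* per-netns mem limit      */
	long total = buckets * depth * min_frag;

	printf("bytes to fill every bucket to the limit: %ld\n", total);
	printf("netns instances needed: %.3f\n", (double)total / per_netns);
	return 0;
}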

The reason this is inevitable is that the attacker's invalid fragments
will never finish (30 sec timeout), while valid fragments will complete
and "exit" the queue; the end result is a hash bucket filled with the
attacker's invalid/incomplete fragments.  IMHO this is a very dangerous
"feature" to support.

--Jesper



