Date:	Mon, 22 Apr 2013 11:10:34 +0200
From:	Jesper Dangaard Brouer <brouer@...hat.com>
To:	Hannes Frederic Sowa <hannes@...essinduktion.org>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [net-next PATCH 2/3] net: fix enforcing of fragment queue hash
 list depth

On Fri, 2013-04-19 at 21:44 +0200, Hannes Frederic Sowa wrote:
> On Fri, Apr 19, 2013 at 04:29:02PM +0200, Jesper Dangaard Brouer wrote:
> > Well, I don't know.  But we do need some solution to the current code.
> 
> In <http://article.gmane.org/gmane.linux.network/261361> I said that we could
> actually have a list length of about 370. At the time this number was stable;
> perhaps you could verify?
> 
> I tried to flood the cache with very minimal packets, so this was actually
> a hint that I should have resized the hash back then. With the current
> fragmentation cache design we would reach optimal behaviour if the memory
> limits kick in and LRU eviction starts before we limit the fragment
> queues in the hash chains. The only way to achieve this is to increase
> the number of hash table slots and lower the maximum chain length limit.
> I would propose a limit of about 25-32 and, as Eric said, a hash size of
> 1024. We could then test whether we are limited in accepting new fragments
> by the memory limit (which would be fine, because LRU eviction kicks in)
> or by the chain length (in which case we could recheck the numbers).
> 
> So the chain limit would only kick in if someone tries to exploit the fragment
> cache by using the method I demonstrated before (which was the reason I
> introduced this limit).

(To avoid pissing people off) I acknowledge that we should change the
hash size, as it's ridiculously small at 64 entries.

But your mem-limit and hash-depth-limit assumptions are broken, because
the mem limit is per netns (network namespace).  Thus, starting more
netns instances invalidates these assumptions.

The dangerous part of your change (commit 5a3da1fe) is that you keep the
existing frag queues (and don't allow new frag queues to be created).
The attacker's fragments will never complete (30 sec timeout), while valid
fragments will complete and "exit" the queue; thus the end result is a
hash bucket filled with the attacker's invalid/incomplete fragments.


Besides, now that we have implemented per-hash-bucket locking (in my
change, commit 19952cc4 "net: frag queue per hash bucket locking"),
I don't think it is a big problem that a single hash bucket is
being "attacked".

IMHO we should just revert the change (commit 5a3da1fe), and increase
the hash size and fix the hashing for IPv6.


And then I'll find another method for fixing the global LRU list
scalability problem (than the "direct-hash-cleaning" method).


-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer


