Date:	Fri, 19 Apr 2013 14:45:28 +0200
From:	Hannes Frederic Sowa <hannes@...essinduktion.org>
To:	Jesper Dangaard Brouer <brouer@...hat.com>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [net-next PATCH 2/3] net: fix enforcing of fragment queue hash list depth

On Fri, Apr 19, 2013 at 02:19:10PM +0200, Jesper Dangaard Brouer wrote:
> I beg to differ.
> 
> First of all, I also thought keeping fragments was the right choice, BUT
> then I realized that we cannot tell good-from-bad frags apart, and an
> attacker will obviously generate more fragments than the real traffic,
> thus we end up keeping the attacker's fragments for 30 sec, because the
> attacker will never "finish/complete" his fragment queues.
> Thus, keeping fragments is what really "helps" a DoS attack.
>
> Second, I have clearly shown, with my performance tests, that frags will
> complete under DoS.

How did you simulate the DoS? Did you try to fill up specific buckets or did
you try to fill up the whole fragment cache?

> > I am not sure it's worth adding extra complexity.
> 
> It's not that complex, and we simply need it, else an attacker can DoS
> us very easily by sending a burst every 30 sec.  We do need this change,
> else we must revert Hannes' patch and find a completely different
> approach to removing the LRU list system.

I agree that we have to do something to mitigate this. I would also
drop the sysctl; I think the kernel itself should take care of that.

Perhaps an easier way to solve this problem would be to switch back to
using a Jenkins hash over all inputs of the hash function and not deal
with this kind of unfairness regarding the possibility of filling up a
single bucket.
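
Something along the lines of the old ipqhashfn, i.e. mixing fragment id,
addresses and protocol with a random seed through jhash (untested sketch
from memory, not the exact code from ipv4/ip_fragment.c):

	static unsigned int ipqhashfn(__be16 id, __be32 saddr, __be32 daddr,
				      u8 prot)
	{
		/* All packet-derived inputs go through the keyed Jenkins
		 * hash, so an attacker cannot aim at a single bucket
		 * without knowing the random seed in ip4_frags.rnd.
		 */
		return jhash_3words((__force u32)id << 16 | prot,
				    (__force u32)saddr, (__force u32)daddr,
				    ip4_frags.rnd) & (INETFRAGS_HASHSZ - 1);
	}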

Perhaps the performance improvements by Jesper (per-bucket locking) could
make up for the more expensive hashing. :)
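
(By per-bucket locking I mean roughly the following -- a sketch of how I
read your earlier patch, not the actual code:

	struct inet_frag_bucket {
		struct hlist_head	chain;
		spinlock_t		chain_lock;	/* protects chain */
	};

Lookups in different buckets then no longer contend on the global
read-write lock, which should buy back some of the hashing cost.)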

Just some minor things I found while looking at the patch:

> -
> -/* averaged:
> - * max_depth = default ipfrag_high_thresh / INETFRAGS_HASHSZ /
> - *	       rounded up (SKB_TRUELEN(0) + sizeof(struct ipq or
> - *	       struct frag_queue))
> - */
> -#define INETFRAGS_MAXDEPTH		128
> +#define INETFRAGS_MAX_HASH_DEPTH	32

Perhaps a comment could be added noting that this is only the default
value for the max hash depth.
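
E.g. something like (just a sketch):

	/* Default maximum chain depth per hash bucket; the effective
	 * limit can be adjusted at runtime via the sysctl.
	 */
	#define INETFRAGS_MAX_HASH_DEPTH	32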
 
 
> -void inet_frag_kill(struct inet_frag_queue *fq, struct inet_frags *f)
> +void __inet_frag_kill(struct inet_frag_queue *fq, struct inet_frags *f,
> +		      bool unlink_hash)

This function could be static.
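
I.e., assuming no other file ends up calling it:

	static void __inet_frag_kill(struct inet_frag_queue *fq,
				     struct inet_frags *f, bool unlink_hash)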

> -	hlist_for_each_entry(q, &hb->chain, list) {
> +	hlist_for_each_entry_safe(q, n, &hb->chain, list) {

Minor, but I think _safe is not needed here.
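
Nothing is removed from the chain inside that lookup loop, so the plain
variant should do, roughly (from memory, not the exact code):

	hlist_for_each_entry(q, &hb->chain, list) {
		if (q->net == nf && f->match(q, key)) {
			atomic_inc(&q->refcnt);
			read_unlock(&f->lock);
			return q;
		}
		depth++;
	}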

> +static int zero;
> +

Could be const.
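
I.e. (extra1/extra2 in struct ctl_table are void *, so the table entry
would then need a cast):

	static const int zero;	/* implicitly zero-initialized */

	/* and in the ctl_table entry: */
	.extra1	= (void *)&zero,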

Thanks,

  Hannes
