Date:	Tue, 23 Apr 2013 22:54:23 +0200
From:	Hannes Frederic Sowa <hannes@...essinduktion.org>
To:	Jesper Dangaard Brouer <brouer@...hat.com>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: [net-next PATCH 2/3] net: fix enforcing of fragment queue hash list depth

On Tue, Apr 23, 2013 at 04:19:23PM +0200, Jesper Dangaard Brouer wrote:
> Yes, traffic patterns do affect the results, BUT you have to be really
> careful profiling this:
> 
> Notice that inet_frag_find() also indirectly takes the LRU lock, and
> the perf tool will blame inet_frag_find().  This is very subtle and
> happens with a traffic pattern that wants to create new frag queues
> (i.e. queues not found in the hash list).
> The problem is that inet_frag_find() calls inet_frag_create() (if q is
> not found), which calls inet_frag_intern(), which calls
> inet_frag_lru_add(), taking the LRU lock.  All of these functions get
> inlined by the compiler, so inet_frag_find() gets the blame.
> 
> 
> To avoid pissing people off:

Ah, come on. I don't think you're doing that. ;)

> Yes, having a long list in the hash bucket obviously also contributes
> significantly.  Yes, we should still increase the hash bucket size.  I'm
> just pointing out that you need to be careful about what you actually
> profile ;-)
> 
> 
> Please see below: profiling of current net-next, with "noinline" added
> to inet_frag_intern, inet_frag_alloc and inet_frag_create, run under
> test 20G3F+MQ.  I hope you can see my point about the LRU list lock;
> please let me know if I have missed something.

I have no objections. My IPv6 test case simply does not push the memory
usage of the fragment cache over the thresholds, so I see no contention
there, only drops caused by the 128 list length limit.

As soon as I fill up the fragment cache, the contention will be on the
LRU lock, as you showed here.
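
As I understand the pattern, it comes down to something like the
following simplified, hypothetical sketch (made-up names, a pthread
mutex in place of the kernel spinlock, lru_lock assumed to be
initialised elsewhere): the add path and the eviction path take the
same global lock, so once the evictor starts running, every new queue
creation competes with it.

#include <pthread.h>
#include <stdlib.h>

struct frag_queue {
        struct frag_queue *lru_next;
        unsigned long truesize;
};

struct frag_lru {
        pthread_mutex_t    lru_lock;     /* stands in for nf->lru_lock */
        struct frag_queue *oldest;       /* head = oldest entry */
        struct frag_queue *newest;       /* tail = most recently added */
        unsigned long      mem;          /* accounted fragment memory */
        unsigned long      high_thresh;  /* start evicting above this */
};

/* Called for every newly created fragment queue (the add path). */
void lru_add(struct frag_lru *lru, struct frag_queue *q)
{
        pthread_mutex_lock(&lru->lru_lock);
        q->lru_next = NULL;
        if (lru->newest)
                lru->newest->lru_next = q;
        else
                lru->oldest = q;
        lru->newest = q;
        lru->mem += q->truesize;
        pthread_mutex_unlock(&lru->lru_lock);
}

/* Called when mem is over the threshold (the eviction path): it takes
 * the very same lock, so creators and the evictor serialize on it. */
void lru_evict(struct frag_lru *lru)
{
        pthread_mutex_lock(&lru->lru_lock);
        while (lru->mem > lru->high_thresh && lru->oldest) {
                struct frag_queue *q = lru->oldest;

                lru->oldest = q->lru_next;
                if (!lru->oldest)
                        lru->newest = NULL;
                lru->mem -= q->truesize;
                free(q);
        }
        pthread_mutex_unlock(&lru->lru_lock);
}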

Greetings,

  Hannes

