Date:	Thu, 29 Nov 2012 16:39:30 -0000
From:	"David Laight" <David.Laight@...LAB.COM>
To:	"Jesper Dangaard Brouer" <brouer@...hat.com>,
	"Eric Dumazet" <eric.dumazet@...il.com>,
	"David S. Miller" <davem@...emloft.net>,
	"Florian Westphal" <fw@...len.de>
Cc:	<netdev@...r.kernel.org>,
	"Pablo Neira Ayuso" <pablo@...filter.org>,
	"Thomas Graf" <tgraf@...g.ch>, "Cong Wang" <amwang@...hat.com>,
	"Patrick McHardy" <kaber@...sh.net>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	"Herbert Xu" <herbert@...dor.hengli.com.au>
Subject: RE: [net-next PATCH V2 9/9] net: increase frag queue hash size and cache-line

> Increase frag queue hash size and assure cache-line alignment to
> avoid false sharing.  Hash size is set to 256, because I have
> observed 206 frag queues in use at 4x10G with packet size 4416 bytes
> (three fragments).
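
For anyone not familiar with the idiom being discussed, below is a minimal
sketch of what "pad each hash bucket out to a cache line" amounts to.  The
field names, the 64-byte line size and the plain aligned attribute
(standing in for the kernel's ____cacheline_aligned_in_smp) are all
illustrative assumptions, not code from the patch.

/* Sketch only: pad each hash bucket to its own cache line so that
 * writers touching adjacent buckets do not false-share a line. */
#include <stdio.h>

#define CACHE_LINE	64
#define FRAG_HASHSZ	256	/* the size proposed in the patch */

struct frag_bucket {
	void *chain;		/* stand-in for the hlist head */
	unsigned int lock;	/* stand-in for a per-bucket spinlock */
} __attribute__((aligned(CACHE_LINE)));

static struct frag_bucket frag_hash[FRAG_HASHSZ];

int main(void)
{
	/* Each bucket now occupies a whole line even though only a
	 * handful of bytes per bucket carry useful data; the table as a
	 * whole is FRAG_HASHSZ times that. */
	printf("bucket = %zu bytes, table = %zu bytes\n",
	       sizeof(struct frag_bucket), sizeof(frag_hash));
	return 0;
}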

Since it is a hash list, won't there always be workloads where
there are hash collisions?
I'm not sure I'm in favour of massively padding out structures
to the size of a cache line.
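
To put a rough number on that, assuming a uniform hash, the expected
number of colliding entries can be estimated from the figures quoted
above (206 queues, 256 buckets).  The sketch below is just that
back-of-the-envelope estimate, nothing taken from the patch itself.

/* Expected collisions for n items uniformly hashed into m buckets:
 * occupied buckets ~= m * (1 - (1 - 1/m)^n), so roughly n minus that
 * many items share a bucket with something else. */
#include <stdio.h>
#include <math.h>

int main(void)
{
	double m = 256.0, n = 206.0;	/* figures from the patch description */
	double occupied = m * (1.0 - pow(1.0 - 1.0 / m, n));

	printf("expected occupied buckets: %.0f\n", occupied);     /* ~142 */
	printf("expected colliding items:  %.0f\n", n - occupied); /* ~64 */
	return 0;
}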

Both of these changes add a moderate amount to the kernel data size
(people who worry about being unable to discard code because
hot_plug is always configured really ought to worry about the
footprint of some of these hash tables as well).

We run Linux on embedded (small) ppc systems where there might only
be a handful of TCP connections; tables of this sort use up
precious memory.
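
For a sense of scale on such a box, a back-of-the-envelope comparison is
below.  The "old" figure assumes the pre-patch table was 64 bare
list-head pointers and the "new" one assumes 256 buckets each padded to
a 64-byte line; both numbers are assumptions for illustration, not
measurements or figures from the patch.

/* Rough footprint comparison for a small 32-bit system. */
#include <stdio.h>

int main(void)
{
	unsigned long old_table = 64UL * sizeof(void *);  /* ~256 B on 32-bit */
	unsigned long new_table = 256UL * 64;             /* 16 KiB */

	printf("old: %lu bytes, new: %lu bytes (%lux bigger)\n",
	       old_table, new_table, new_table / old_table);
	return 0;
}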

While padding to a cache line might reduce the number of cache
snoops when hammering the code in a benchmark, in a real-life
situation I suspect that making another cache line busy is just as
likely to flush out some other important data.
This is similar to the reason that excessive function inlining
and loop unrolling speed up benchmarks but slow down real
code.

	David
