Date:	Fri, 7 Mar 2008 03:23:55 +0100
From:	Nick Piggin <npiggin@...e.de>
To:	Christoph Lameter <clameter@....com>
Cc:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org, yanmin_zhang@...ux.intel.com,
	dada1@...mosbay.com
Subject: Re: [rfc][patch 1/3] slub: fix small HWCACHE_ALIGN alignment

On Thu, Mar 06, 2008 at 02:56:42PM -0800, Christoph Lameter wrote:
> On Thu, 6 Mar 2008, Nick Piggin wrote:
> 
> > There are definitely some networking slabs that pass the flag and can
> > be non-power-of-2. And don't forget cachelines can be anywhere up to
> > 256 bytes in size. So yeah, it makes sense to merge the patch and then
> > examine the callers if you feel strongly about it.
> 
> I just do not like to add fluff that has basically no effect. I tried to
> improve things by not doing anything special if we cannot cacheline-align
> the object. Least surprise (at least for me). It is bad enough that we
> just decide to ignore the request for alignment for small caches.

That's just because you (apparently still) have a misconception about what
the flag is supposed to be for. It is not for aligning things to the start
of a cacheline boundary. It is not for avoiding false sharing on SMP. It
is for ensuring that a given object will span the fewest
cachelines. This can actually be important if you do anything like random
lookups or tree walks where the object contains the tree node.
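
(For concreteness, this is what a caller asking for it looks like; the
cache name and struct here are hypothetical, but note that a node of
three pointer-sized fields is exactly the 24-byte case below.)

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>

/* Hypothetical 24-byte object (on 64-bit) of the kind the argument is
 * about: a tree node that gets touched on random walks. */
struct foo_node {
        struct foo_node *left;
        struct foo_node *right;
        unsigned long key;
};

static struct kmem_cache *foo_cache;

static int __init foo_init(void)
{
        /* SLAB_HWCACHE_ALIGN asks that each object straddle as few
         * cachelines as possible; it does not promise that objects
         * start on a cacheline boundary. */
        foo_cache = kmem_cache_create("foo_node", sizeof(struct foo_node),
                                      0, SLAB_HWCACHE_ALIGN, NULL);
        return foo_cache ? 0 : -ENOMEM;
}
module_init(foo_init);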

Consider a 64-byte cacheline and a 24-byte object, packed back to back:
cacheline  |-------|-------|-------
object     |--|--|--|--|--|--|--|--

Over one repeat of the packing pattern (192 bytes), 2 of every 8 objects
straddle a cacheline boundary. So if you touch 8 random objects, it is
statistically likely to cost you 10 cache misses (so long as the working
set is sufficiently cold / so much larger than cache that cacheline
sharing is insignificant).
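
(If you want to check that arithmetic, here is a throwaway userspace
sketch, nothing to do with the kernel sources, that averages the number
of lines an object touches over one period of the packing pattern. It
prints 1.25 for 24-byte objects on 64-byte lines, i.e. 10 misses per 8
objects, and 1.00 once they are padded to 32 bytes.)

#include <stdio.h>

static unsigned long gcd(unsigned long a, unsigned long b)
{
        while (b) {
                unsigned long t = a % b;
                a = b;
                b = t;
        }
        return a;
}

/* Average cachelines touched per object when `size`-byte objects are
 * packed back to back against `line`-byte cachelines.  One period of
 * the pattern (lcm of size and line) covers every distinct start
 * offset, so averaging over it is exact. */
static double lines_per_object(unsigned long size, unsigned long line)
{
        unsigned long period = size / gcd(size, line) * line;
        unsigned long touched = 0, off;

        for (off = 0; off < period; off += size)
                touched += (off + size - 1) / line - off / line + 1;

        return (double)touched / (period / size);
}

int main(void)
{
        printf("24/64: %.2f lines per object\n", lines_per_object(24, 64));
        printf("32/64: %.2f lines per object\n", lines_per_object(32, 64));
        return 0;
}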

If you actually honour HWCACHE_ALIGN, then the same object will be padded
out to 32 bytes:
cacheline  |-------|
object     |---|---|

Now 8 objects will cost 8 misses. A 20% saving, and maybe almost a 20%
performance improvement on a cache-miss-bound workload.
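
To be concrete about what honouring the flag means: halve the cacheline
until the object no longer fits in the half, then round the object size
up to that fraction. A minimal userspace sketch of that calculation,
with the line size hardcoded to 64 for illustration (the allocators
query it at runtime):

#include <stdio.h>

#define CACHE_LINE_SIZE 64UL    /* assumed line size, for illustration */

/* Round `size` up to the smallest power-of-two fraction of a cacheline
 * that still holds it (or to the full line if nothing does):
 * 24 -> 32, 70 -> 128. */
static unsigned long hwcache_align_size(unsigned long size)
{
        unsigned long align = CACHE_LINE_SIZE;

        while (size <= align / 2)
                align /= 2;

        return (size + align - 1) & ~(align - 1);
}

int main(void)
{
        printf("24 -> %lu\n", hwcache_align_size(24));
        printf("70 -> %lu\n", hwcache_align_size(70));
        return 0;
}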

Before we go around in circles again, do you accept this? If yes, then
what is your argument that SLUB knows better than the caller; if no, then
why not?

