Date:	Thu, 6 Mar 2008 18:54:19 -0800 (PST)
From:	Christoph Lameter <clameter@....com>
To:	Nick Piggin <npiggin@...e.de>
cc:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org, yanmin_zhang@...ux.intel.com,
	dada1@...mosbay.com
Subject: Re: [rfc][patch 1/3] slub: fix small HWCACHE_ALIGN alignment

> It doesn't say start of cache line. It says align them *on* cachelines.
> Two 32-byte objects on a 64-byte cacheline are aligned on the cacheline.
> 2.67 24-byte objects on a 64-byte cacheline are not aligned on the
> cacheline.

Two 32-byte objects per 64-byte cacheline means only one of them starts on
a cacheline boundary.

Certainly cacheline contention is reduced, and performance potentially
increased, if there are fewer objects in a cacheline.

The same argument can be made for aligning 8-byte objects on 32-byte
boundaries: instead of eight objects per cacheline you would only have two.
Why accept eight per line?
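
To make the packing arithmetic concrete, here is a minimal user-space
sketch (the 64-byte cacheline and the three object sizes are the only
assumptions); it prints each object's start offset within the first
cacheline and flags the ones that straddle into the next line:

	#include <stdio.h>

	#define LINE 64	/* assumed cacheline size */

	int main(void)
	{
		unsigned long sizes[] = { 8, 24, 32 };
		int i;

		for (i = 0; i < 3; i++) {
			unsigned long size = sizes[i], off;

			printf("%2lu-byte objects:", size);
			/* Walk the objects whose start falls in line 0. */
			for (off = 0; off < LINE; off += size)
				printf(" %lu%s", off,
				       off + size > LINE ? " (straddles)" : "");
			printf("\n");
		}
		return 0;
	}

In every case only the object at offset 0 starts on a line boundary; the
24-byte case additionally has an object straddling two lines, which is the
2.67-objects-per-line situation quoted above.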

Isn't all of this a bit arbitrary, and contrary to the intent of avoiding
cacheline contention?

The cleanest solution is to specify the alignment for each slabcache where
such an alignment is needed. The alignment can be a global constant or a
function like cache_line_size().

I.e. define

	int smp_align;

On bootup, check the number of running cpus and then pass smp_align as
the align parameter (most slabcaches have no other alignment needs; if
they do, we can combine the two using max()).
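
A minimal sketch of that, with the setup hook name being an assumption
(the real patch would pick whatever boot hook is convenient):

	#include <linux/init.h>
	#include <linux/cache.h>
	#include <linux/cpumask.h>

	int smp_align;

	void __init setup_smp_align(void)
	{
		/* Whole-cacheline alignment only pays off on SMP, where
		 * it avoids contention between cpus; on UP leave it 0,
		 * i.e. the architecture minimum. */
		smp_align = num_online_cpus() > 1 ? cache_line_size() : 0;
	}

A cache with its own alignment requirement would then pass something like
max(smp_align, its_align) as the align argument to kmem_cache_create().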

If we want to do more sophisticated things, then let's have a function
that aligns the object on power-of-two boundaries, like SLAB_HWCACHE_ALIGN
does now:

	sub_cacheline_align(size)
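
A sketch of that helper, under the assumption that it should mirror the
rounding SLAB_HWCACHE_ALIGN effectively does today (halve the cacheline
until the object no longer fits the smaller power-of-two slot; size is
assumed nonzero):

	/* Return the largest power-of-two fraction of a cacheline that
	 * still fits the object: with 64-byte lines, a 24-byte object
	 * gets 32-byte alignment and a 40-byte object a full line. */
	static unsigned long sub_cacheline_align(unsigned long size)
	{
		unsigned long align = cache_line_size();

		while (size <= align / 2)
			align /= 2;
		return align;
	}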

Doing so will make it more transparent what is going on and which behavior
we want, and we can get rid of SLAB_HWCACHE_ALIGN and its weird semantics.
Specifying smp_align will truly always align on a cacheline boundary when
we are on an SMP system.
