Message-ID: <20080307020443.GA21185@wotan.suse.de>
Date: Fri, 7 Mar 2008 03:04:43 +0100
From: Nick Piggin <npiggin@...e.de>
To: Christoph Lameter <clameter@....com>
Cc: Pekka Enberg <penberg@...helsinki.fi>, netdev@...r.kernel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
yanmin_zhang@...ux.intel.com, David Miller <davem@...emloft.net>,
Eric Dumazet <dada1@...mosbay.com>
Subject: Re: [rfc][patch 1/3] slub: fix small HWCACHE_ALIGN alignment
On Thu, Mar 06, 2008 at 02:53:11PM -0800, Christoph Lameter wrote:
> On Thu, 6 Mar 2008, Nick Piggin wrote:
>
> > > That was due to SLUB's support for smaller allocation sizes. AFAICT has
> > > nothing to do with alignment.
> >
> > The smaller sizes meant objects were less often aligned on cacheline
> > boundaries.
>
> Right since SLAB_HWCACHE_ALIGN does not align for very small objects.
It doesn't align small objects to cacheline boundaries in either allocator.
The regression is just because slub can support smaller sizes of objects
AFAIKS.
> > We could, but I'd rather just use the flag.
>
> Do you have a case in mind where that would be useful? We had a
Patch 3/3
> SLAB_HWCACHE_MUST_ALIGN or so at some point but it was rarely if ever
> used.
OK, but that's not the same thing.
> Note that there is also KMEM_CACHE which picks up the alignment from
> the compiler.
Yeah, that's not quite as good either. My allocation flag is dynamic, so
it will not bloat things for no reason on UP machines running SMP kernels.
It also aligns to the detected machine cacheline size rather than a
compile-time constant.