Message-ID: <20080306025758.GB27150@wotan.suse.de>
Date: Thu, 6 Mar 2008 03:57:58 +0100
From: Nick Piggin <npiggin@...e.de>
To: Christoph Lameter <clameter@....com>
Cc: David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, yanmin_zhang@...ux.intel.com,
dada1@...mosbay.com
Subject: Re: [rfc][patch 1/3] slub: fix small HWCACHE_ALIGN alignment
On Wed, Mar 05, 2008 at 01:06:30PM -0800, Christoph Lameter wrote:
> On Tue, 4 Mar 2008, David Miller wrote:
>
> > > Huh?? It is not a new definition, it is exactly what SLAB does. And
> > > then you go and do something different and claim that you follow
> > > what slab does.
> >
> > I completely agree with Nick.
>
> So you also want subalignment because of cacheline crossing for 24 byte
> slabs? We then only have 2 objects per cacheline instead of 3 but no
> crossing anymore.
>
> Well okay, if there are multiple requests then let's merge Nick's patch that
> does this. Still don't think that this will do much ...
> Instead of 170 we will only have 128 objects per slab (64 byte
> cacheline).
That's what callers expect when they pass the HWCACHE_ALIGN flag. Wouldn't it
be more logical to fix the callers if you think the alignment costs too much
memory for too little improvement?
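
To make the numbers concrete, here's a small userspace sketch (the helper
name is made up; the halving rule is the SLAB-style calculation being
discussed): align to the largest power-of-two sub-multiple of the cacheline
that still fits the object, so nothing straddles a line. For a 24-byte
object on a 64-byte line that gives 32-byte alignment, i.e. 2 objects per
line and 128 instead of 170 objects per 4K slab:

#include <stdio.h>

/*
 * Sketch of the sub-alignment rule: use the largest power-of-two
 * fraction of the cacheline that the object still fits in, so no
 * object crosses a line boundary.
 */
static unsigned long hwcache_subalign(unsigned long size,
				      unsigned long cache_line)
{
	unsigned long align = cache_line;

	while (size <= align / 2)
		align /= 2;
	return align;
}

int main(void)
{
	unsigned long size = 24, line = 64;
	unsigned long align = hwcache_subalign(size, line);
	unsigned long padded = (size + align - 1) & ~(align - 1);

	printf("align %lu, padded size %lu, %lu objects per 4K slab\n",
	       align, padded, 4096 / padded);
	return 0;
}
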
> It will affect the following slab caches (mm) reducing the density of
> objects.
>
> scsi_bidi_sdb numa_policy fasync_cache xfs_bmap_free_item xfs_dabuf
> fstrm_item dm_target_io
>
> Nothing related to networking....
There definitely look to be some networking slabs that pass the flag and
can be non-power-of-2 in size. And don't forget that cachelines can be
anywhere up to 256 bytes. So yes, it definitely makes sense to merge the
patch and then examine the callers if you feel strongly about it.
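
As a rough illustration of the cacheline-size point (the 192-byte object
below is purely hypothetical, just to show the trend), the same sub-alignment
rule gets more expensive as the line size grows:

#include <stdio.h>

/* Same halving rule as in the sketch above; illustrative sizes only. */
static unsigned long hwcache_subalign(unsigned long size,
				      unsigned long cache_line)
{
	unsigned long align = cache_line;

	while (size <= align / 2)
		align /= 2;
	return align;
}

int main(void)
{
	unsigned long lines[] = { 64, 128, 256 };
	unsigned long size = 192;	/* hypothetical object size */
	unsigned int i;

	for (i = 0; i < 3; i++) {
		unsigned long align = hwcache_subalign(size, lines[i]);
		unsigned long padded = (size + align - 1) & ~(align - 1);

		printf("line %3lu: %lu padded to %lu, %lu -> %lu objects per 4K slab\n",
		       lines[i], size, padded, 4096 / size, 4096 / padded);
	}
	return 0;
}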