[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <1238483617.26587.32.camel@penberg-laptop>
Date: Tue, 31 Mar 2009 10:13:37 +0300
From: Pekka Enberg <penberg@...helsinki.fi>
To: Christoph Lameter <cl@...ux.com>
Cc: David Rientjes <rientjes@...gle.com>,
Nick Piggin <nickpiggin@...oo.com.au>,
Martin Bligh <mbligh@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [patch 2/3] slub: scan partial list for free slabs when
thrashing
On Sun, 29 Mar 2009, David Rientjes wrote:
> > Whenever a cpu cache satisfies a fastpath allocation, a fastpath counter
> > is incrememted. This counter is cleared whenever the slowpath is
> > invoked. This tracks how many fastpath allocations the cpu slab has
> > fulfilled before it must be refilled.
On Mon, 2009-03-30 at 10:37 -0400, Christoph Lameter wrote:
> That adds fastpath overhead and it shows for small objects in your tests.
Yup, and looking at this:
+ u16 fastpath_allocs; /* Consecutive fast allocs before slowpath */
+ u16 slowpath_allocs; /* Consecutive slow allocs before watermark */
How much do operations on u16 hurt on, say, x86-64? It's nice that
sizeof(struct kmem_cache_cpu) is capped at 32 bytes but on CPUs that
have bigger cache lines, the types could be wider.
Christoph, why is struct kmem_cache_cpu not __cacheline_aligned_in_smp
btw?
Pekka
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists