Message-ID: <Pine.LNX.4.64.0705161139540.10265@schroedinger.engr.sgi.com>
Date: Wed, 16 May 2007 11:43:55 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
cc: Matt Mackall <mpm@...enic.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Thomas Graf <tgraf@...g.ch>,
David Miller <davem@...emloft.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Daniel Phillips <phillips@...gle.com>,
Pekka Enberg <penberg@...helsinki.fi>
Subject: Re: [PATCH 0/5] make slab gfp fair
On Wed, 16 May 2007, Peter Zijlstra wrote:
> On Tue, 2007-05-15 at 15:02 -0700, Christoph Lameter wrote:
> > On Tue, 15 May 2007, Peter Zijlstra wrote:
> >
> > > How about something like this; it seems to sustain a little stress.
> >
> > Argh again mods to kmem_cache.
>
> Hmm, I had not understood you minded that very much; I did stay away
> from all the fast paths this time.
Well, you added a new locking level and changed the locking hierarchy!
> The thing is, I wanted to fold all the emergency allocs into a single
> slab, not a per cpu thing. And once you lose the per cpu thing, you
> need some extra serialization. Currently the top level lock is
> slab_lock(page), but that only works because we have interrupts disabled
> and work per cpu.
SLUB can only allocate from a per cpu slab. You will have to reserve one
slab per cpu anyway, unless the cpu slab is flushed after each access. The
same is true for SLAB: it wants objects in its per cpu queues.
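
To make that concrete, a rough sketch of the two cases (condensed from the
SLUB fast path; reserve_lock and reserve_page are made-up names for the
single shared emergency slab being discussed, not existing fields):

	/* Per cpu slab: only the local cpu ever touches it, so disabling
	 * interrupts is all the serialization the fast path needs.       */
	void *alloc_from_cpu_slab(struct kmem_cache *s)
	{
		struct page *page;
		void **object;
		unsigned long flags;

		local_irq_save(flags);
		page = s->cpu_slab[smp_processor_id()];
		object = page ? page->freelist : NULL;
		if (object)
			page->freelist = object[page->offset];
		local_irq_restore(flags);
		return object;
	}

	/* A single shared emergency slab needs its own lock on top of
	 * slab_lock(page) (the extra locking level objected to above),
	 * and that lock is taken from any cpu in the system.             */
	void *alloc_from_shared_reserve(struct kmem_cache *s)
	{
		void **object;

		spin_lock(&s->reserve_lock);
		object = s->reserve_page->freelist;
		if (object)
			s->reserve_page->freelist =
				object[s->reserve_page->offset];
		spin_unlock(&s->reserve_lock);
		return object;
	}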
> Why is it bad to extend kmem_cache a bit?
Because it is, for all practical purposes, a heavily accessed read-only
structure. Modifications occur only in the per node and per cpu structures.
On a system with 4k processors, any write kicks the kmem_cache cacheline
out of the caches of all 4k processors.
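
Roughly, the layout (abbreviated; the exact field list differs, but the
split is the point):

	struct kmem_cache {
		/* Set up once at kmem_cache_create() time and then
		 * essentially read only: every processor can keep this
		 * cacheline in shared state.                            */
		unsigned long flags;
		int size;
		int objsize;
		int offset;
		const char *name;

		/* Run time state lives in per node / per cpu structures,
		 * so writes stay local to one node or cpu.              */
		struct kmem_cache_node *node[MAX_NUMNODES];
		struct page *cpu_slab[NR_CPUS];
	};

Put, say, a lock or a reserve list into the shared part and every emergency
allocation dirties that cacheline, evicting it from the caches of all the
other processors that only want to read size/objsize/etc.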