Message-Id: <200902041522.01307.nickpiggin@yahoo.com.au>
Date: Wed, 4 Feb 2009 15:22:00 +1100
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Pekka Enberg <penberg@...helsinki.fi>
Cc: Christoph Lameter <cl@...ux-foundation.org>,
Nick Piggin <npiggin@...e.de>,
"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
Lin Ming <ming.m.lin@...el.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [patch] SLQB slab allocator
On Wednesday 04 February 2009 05:47:48 Pekka Enberg wrote:
> On Tue, Feb 3, 2009 at 8:42 PM, Pekka Enberg <penberg@...helsinki.fi> wrote:
> >> It will grow unconstrained if you elect to defer queue processing. That
> >> was what we discussed.
> >
> > Well, the slab_hiwater() check in __slab_free() of mm/slqb.c will cap
> > the size of the queue. But we do the same thing in SLAB with
> > alien->limit in cache_free_alien() and ac->limit in __cache_free(). So
> > I'm not sure what you mean when you say that the queues will "grow
> > unconstrained" (in either of the allocators). Hmm?
>
> That said, I can imagine a worst-case scenario where a queue with N
> objects is pinning N mostly empty slabs. As soon as we hit the
> periodical flush, we might need to do tons of work. That's pretty hard
> to control with watermarks as well, since the scenario depends solely
> on allocation/free patterns.
That's very true, and we touched on this earlier. It is, I guess
you could say, a downside of queueing. But the analogous situation
in SLUB would be lots of pages on the partial list with very few
free objects each, or objects being freed to pages that have few
allocated objects in them. Basically, SLUB has to do the equivalent
extra work in the fastpath instead.
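
To make the trade-off concrete, here is a minimal C sketch of a
capped free queue with deferred flushing. It is illustrative only,
not the actual mm/slqb.c code; the names (free_queue, QUEUE_HIWATER,
flush_free_queue, release_to_slab) are made up for the example:

#include <stddef.h>

#define QUEUE_HIWATER 32	/* hypothetical per-queue cap */

struct free_queue {
	void *objects[QUEUE_HIWATER];
	size_t nr;			/* objects currently queued */
};

/* Stand-in for returning one object to its slab page (hypothetical). */
static void release_to_slab(void *obj)
{
	(void)obj;	/* real code would update the page's freelist */
}

/*
 * Hand every queued object back to its slab. This is the deferred,
 * potentially expensive work.
 */
static void flush_free_queue(struct free_queue *q)
{
	while (q->nr)
		release_to_slab(q->objects[--q->nr]);
}

/*
 * Free fastpath: queue the object and return. Only when the queue
 * reaches the high-water mark do we pay for the flush.
 */
static void queued_free(struct free_queue *q, void *obj)
{
	q->objects[q->nr++] = obj;
	if (q->nr >= QUEUE_HIWATER)
		flush_free_queue(q);
}

The worst case Pekka describes above is exactly the flush step: each
queued free is O(1), but if the N queued objects each came from a
different, mostly empty slab page, the eventual flush has to touch
all N pages at once.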