Message-ID: <Pine.LNX.4.64.0901141219140.26507@quilx.com>
Date: Wed, 14 Jan 2009 12:40:12 -0600 (CST)
From: Christoph Lameter <cl@...ux-foundation.org>
To: Nick Piggin <npiggin@...e.de>
cc: Pekka Enberg <penberg@...helsinki.fi>,
"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
Lin Ming <ming.m.lin@...el.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [patch] SLQB slab allocator
On Wed, 14 Jan 2009, Nick Piggin wrote:
> Well if you would like to consider SLQB as a fix for SLUB, that's
> fine by me ;) Actually I guess it is a valid way to look at the problem:
> SLQB solves the OLTP regression, so the only question is "what is the
> downside of it?".
The downside is that it brings back the SLAB machinery that SLUB was
designed to avoid: queue expiration, timers that fire at intervals user
space cannot control, object dispersal in the kernel address space, and
memory policy handling inside the slab allocator. It even seems to
include periodic moving of objects between queues. The NUMA handling is
still a bit foggy to me since it seems to assume a mapping between cpus
and nodes; there are cpuless nodes as well as memoryless nodes.
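
(For readers who have not looked at that code: the timer-driven
expiration being criticized works roughly like the sketch below. This is
an illustrative simplification in the style of mm/slab.c's cache_reap();
the helpers drain_queue() and this_cpu_queue() are invented names for
the example, not SLQB's or SLAB's actual interfaces.)

/*
 * Rough sketch of SLAB-style timer-driven queue expiration.
 * Illustrative only: drain_queue(), this_cpu_queue() and cache_list
 * are made-up names; the real code is cache_reap() in mm/slab.c.
 */
static void reap_work_fn(struct work_struct *work)
{
	struct kmem_cache *cache;

	/* Push part of every cache's per-cpu queue back to the free lists. */
	list_for_each_entry(cache, &cache_list, list)
		drain_queue(cache, this_cpu_queue(cache));

	/* Re-arm a couple of seconds out: objects get flushed at an
	 * interval that user space can neither see nor control. */
	schedule_delayed_work(&__get_cpu_var(reap_work), 2 * HZ);
}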
SLQB may be a good cleanup of SLAB. It's good that it is based on the
cleaned-up code in SLUB, but the fundamental design is SLAB's (or rather
the Solaris allocator's, from which we got the design for all the
queuing machinery in the first place), and it preserves many of the
drawbacks of that code.
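
(For context, the per-cpu queue that this lineage is built around looks
roughly like SLAB's array_cache; the layout below is simplified from
mm/slab.c and is not SLQB's exact structure.)

/* Per-cpu object queue in the SLAB / Solaris-magazine lineage.
 * Simplified from mm/slab.c's struct array_cache; SLQB's queues differ
 * in detail but follow the same idea. */
struct array_cache {
	unsigned int avail;	 /* objects currently sitting in the queue */
	unsigned int limit;	 /* queue size before spilling elsewhere   */
	unsigned int batchcount; /* objects moved per refill or flush      */
	unsigned int touched;	 /* recently used, so the reaper goes easy */
	void *entry[];		 /* the queued object pointers             */
};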
If SLQB were to replace SLAB, there would be a lot of code shared
between the remaining allocators (the debugging support, for example).
A generic slab allocator framework might then become possible, within
which a variety of algorithms could be implemented.
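
(Purely as a hypothetical illustration of that last point, and not an
interface that exists anywhere today, such a framework might dispatch
through something like an ops table, with the debugging support shared
above it:)

/* Hypothetical sketch only: no slab_allocator_ops exists in the kernel.
 * The point is that poisoning, red-zoning and the reporting code could
 * be shared while the queuing/ordering policy stays per backend. */
struct slab_allocator_ops {
	void *(*alloc)(struct kmem_cache *s, gfp_t gfpflags, int node);
	void  (*free)(struct kmem_cache *s, void *object);
	int   (*create)(struct kmem_cache *s, unsigned long flags);
	void  (*shrink)(struct kmem_cache *s);
};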