Message-ID: <Pine.LNX.4.64.0707091017210.15962@schroedinger.engr.sgi.com>
Date: Mon, 9 Jul 2007 10:26:08 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Nick Piggin <nickpiggin@...oo.com.au>, Ingo Molnar <mingo@...e.hu>,
linux-kernel@...r.kernel.org, linux-mm@...r.kernel.org,
suresh.b.siddha@...el.com, corey.d.gough@...el.com,
Pekka Enberg <penberg@...helsinki.fi>,
Matt Mackall <mpm@...enic.com>,
Denis Vlasenko <vda.linux@...glemail.com>,
Erik Andersen <andersen@...epoet.org>
Subject: Re: [patch 09/10] Remove the SLOB allocator for 2.6.23
On Mon, 9 Jul 2007, Andrew Morton wrote:
> On Mon, 9 Jul 2007 09:06:46 -0700 (PDT) Christoph Lameter <clameter@....com> wrote:
>
> > But yes, the power-of-two caches are a necessary design feature of
> > SLAB/SLUB that allows O(1) operations on kmalloc slabs, which in turn
> > causes memory wastage because of rounding the alloc up to the next
> > power of two.
>
> I've frequently wondered why we don't just create more caches for kmalloc:
> make it denser than each-power-of-2-plus-a-few-others-in-between.
>
> I assume the tradeoff here is better packing versus having a ridiculous
> number of caches. Is there any other cost?
> Because even having 1024 caches wouldn't consume a terrible amount of
> memory and I bet it would result in aggregate savings.
I have tried any number of approaches without too much success, even one
slab cache for every 8 bytes. This creates additional admin overhead
through more control structures (that overhead is pretty minimal but it
nevertheless exists).
The main issue is that kmallocs of different sizes must use different
pages. If one allocates one 64-byte item and one 256-byte item, and both
the 64-byte and the 256-byte slabs are empty, then SLAB/SLUB will have to
allocate 2 pages; SLOB can fit them into one. This is basically only
relevant early after boot. The advantage goes away as the system starts
to work and as more objects are allocated in the slabs, but the
power-of-two slab will always have to extend its size in page-sized
chunks, which leads to some overhead that SLOB can avoid by placing
objects of multiple sizes in one slab.
The tradeoff in SLOB is that it cannot be an O(1) allocator, because it
has to manage these variable-sized objects by traversing free lists (see
the sketch below). I think the advantage that SLOB generates here is
pretty minimal and is easily offset by the problems of maintaining SLOB.
> Of course, a scheme which creates kmalloc caches on-demand would be better,
> but that would kill our compile-time cache selection, I suspect.
SLUB already creates kmalloc caches on-demand for the DMA caches. But
then we do not allow compile-time cache selection for DMA caches either.
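A sketch of what "on-demand" means here, as a userspace model with
hypothetical names (get_dma_cache, size_index); the real SLUB code is
different and must also serialize cache creation with a lock:

#include <stdio.h>
#include <stdlib.h>

struct cache {
	size_t object_size;
	/* a real cache would hold partial-slab lists, counters, ... */
};

#define NR_CLASSES 12			/* 8 .. 16384, powers of two */
static struct cache *dma_caches[NR_CLASSES];

static int size_index(size_t size)
{
	size_t s = 8;
	int i = 0;

	while (s < size) {
		s <<= 1;
		i++;
	}
	return i;
}

static struct cache *get_dma_cache(size_t size)
{
	int i = size_index(size);

	if (!dma_caches[i]) {		/* first request: create it now */
		dma_caches[i] = malloc(sizeof(struct cache));
		if (!dma_caches[i])
			return NULL;
		dma_caches[i]->object_size = (size_t)8 << i;
		printf("created DMA cache for %zu-byte objects\n",
		       dma_caches[i]->object_size);
	}
	return dma_caches[i];
}

int main(void)
{
	get_dma_cache(100);		/* creates the 128-byte cache */
	get_dma_cache(120);		/* reuses it */
	return 0;
}

Because the cache does not exist until the first allocation asks for it,
there is nothing for the compiler to resolve at build time, which is why
on-demand creation and compile-time cache selection exclude each other.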