Message-ID: <Pine.LNX.4.64.0707090901410.13970@schroedinger.engr.sgi.com>
Date: Mon, 9 Jul 2007 09:06:46 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Nick Piggin <nickpiggin@...oo.com.au>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
linux-mm@...r.kernel.org, suresh.b.siddha@...el.com,
corey.d.gough@...el.com, Pekka Enberg <penberg@...helsinki.fi>,
Matt Mackall <mpm@...enic.com>,
Denis Vlasenko <vda.linux@...glemail.com>,
Erik Andersen <andersen@...epoet.org>
Subject: Re: [patch 09/10] Remove the SLOB allocator for 2.6.23
On Mon, 9 Jul 2007, Nick Piggin wrote:
> > A reason for retaining slob would be that it has some O(n) memory saving
> > due to better packing, etc. Indeed that was the reason for merging it in
> > the first place. If slob no longer retains that advantage (wrt slub) then
> > we no longer need it.
>
> SLOB contains several significant O(1) and also O(n) memory savings that
> are so far impossible-by-design for SLUB. They are: slab external
> fragmentation is significantly reduced; kmalloc internal fragmentation is
> significantly reduced; order of magnitude smaller kmem_cache data type;
> order of magnitude less code...
Well, that is only true for kmalloc objects < PAGE_SIZE, and it is to
some extent offset by SLUB's need to keep per-object data. But yes, the
power-of-two caches are a necessary design feature of SLAB/SLUB: they
allow O(1) operation of the kmalloc slabs, which in turn causes memory
wastage because allocations are rounded up to the next power of two.
SLUB wastes less there than SLAB, since it can pack power-of-two
objects tightly into a slab instead of having to place additional
control information there the way SLAB does.
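
For illustration, here is a minimal userspace sketch (not kernel code,
and it ignores the special non-power-of-two 96- and 192-byte kmalloc
caches) showing how rounding a request up to the next power-of-two
cache size turns the difference into internal fragmentation:

#include <stdio.h>

/* Round a request up to the next power of two, as the power-of-two
 * kmalloc caches effectively do for odd-sized requests. */
static unsigned long round_up_pow2(unsigned long size)
{
	unsigned long rounded = 1;

	while (rounded < size)
		rounded <<= 1;
	return rounded;
}

int main(void)
{
	unsigned long sizes[] = { 520, 1100, 3000 };
	unsigned long i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned long rounded = round_up_pow2(sizes[i]);

		printf("request %4lu -> cache %4lu, wasted %4lu bytes\n",
		       sizes[i], rounded, rounded - sizes[i]);
	}
	return 0;
}

A 520-byte request lands in the 1024-byte cache and wastes 504 bytes;
that is the rounding overhead being discussed.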
O(n) memory savings? What is that?