Message-ID: <84144f020712090020o5bdeb54fqaa9e6578bd066f29@mail.gmail.com>
Date: Sun, 9 Dec 2007 10:20:19 +0200
From: "Pekka Enberg" <penberg@...helsinki.fi>
To: "Ingo Molnar" <mingo@...e.hu>
Cc: "Linus Torvalds" <torvalds@...ux-foundation.org>,
"Andrew Morton" <akpm@...ux-foundation.org>,
"Matt Mackall" <mpm@...enic.com>,
"Rafael J. Wysocki" <rjw@...k.pl>,
LKML <linux-kernel@...r.kernel.org>,
"Christoph Lameter" <clameter@....com>
Subject: Re: tipc_init(), WARNING: at arch/x86/mm/highmem_32.c:52, [2.6.24-rc4-git5: Reported regressions from 2.6.23]
Hi Ingo,
On Dec 8, 2007 10:29 PM, Ingo Molnar <mingo@...e.hu> wrote:
> so it has a "free list", which is clearly per cpu. Hang on! Isn't that
> actually a per CPU queue? Which SLUB has not, we are told? The "U" in
> SLUB. How on earth can an allocator in 2007 claim to have no queuing
> (which is in essence caching)? Am I on crack with this? Did I miss
> something really obvious?
I think you did. The difference is explained in Christoph's announcement:
"A particular concern was the complex management of the numerous
object queues in SLAB. SLUB has no such queues. Instead we dedicate a
slab for each allocating CPU and use objects from a slab directly
instead of queueing them up."
That, I think, is where SLUB gets its name (the "unqueued" part).
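
To make the difference concrete, here is a toy userland sketch; the
names (toy_slab_queue, toy_slub_cpu, etc.) are made up for illustration
and this is not the actual kernel code. SLAB's fast path pops object
pointers off a per-CPU queue that is detached from any particular slab
page, while SLUB's fast path just follows the chain of free objects
inside the one slab page the CPU currently owns:

#include <stdio.h>

/* SLAB-style fast path: a per-CPU queue ("array cache") of object
 * pointers; objects from many different slab pages mix in here. */
struct toy_slab_queue {
	unsigned int avail;	/* objects currently queued */
	void *entries[16];	/* the queue Ingo is pointing at */
};

void *toy_slab_alloc(struct toy_slab_queue *q)
{
	if (q->avail)
		return q->entries[--q->avail];	/* pop from the queue */
	return NULL;	/* slow path: refill the queue from a slab */
}

/* SLUB-style fast path: the CPU owns one slab page, and the "free
 * list" is just the chain of free objects inside that page, so there
 * is no separate queue structure to size, refill, or drain. */
struct toy_slub_cpu {
	void *freelist;		/* first free object in the CPU's page */
};

void *toy_slub_alloc(struct toy_slub_cpu *c)
{
	void *object = c->freelist;
	if (object)
		c->freelist = *(void **)object;	/* next link is stored
						   in the free object */
	return object;	/* NULL means: slow path, grab a fresh page */
}

int main(void)
{
	/* Build a two-object free list inside a fake "page". */
	void *page[2] = { &page[1], NULL };
	struct toy_slub_cpu cpu = { .freelist = &page[0] };

	printf("%p\n", toy_slub_alloc(&cpu));	/* &page[0] */
	printf("%p\n", toy_slub_alloc(&cpu));	/* &page[1] */
	printf("%p\n", toy_slub_alloc(&cpu));	/* (nil): page used up */
	return 0;
}

The point is that SLUB's only per-CPU state is a pointer into the page
itself, so objects never leave their slab, whereas SLAB's queues hold
objects detached from their slabs, which is exactly what makes them
expensive to manage.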
Now, while the SLAB code is "pleasant and straightforward code" (thanks,
btw) for UMA, it's really hairy for NUMA, and the "alien caches" eat
tons of memory (which is why Christoph wrote SLUB in the first place;
the current code in SLAB is mostly unfixable due to its *queuing*
nature).
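
To put rough numbers on that (purely illustrative, not measured): SLAB
keeps, per cache and per node, an alien array cache for every other
node, so the overhead grows as roughly caches * nodes * (nodes - 1).
On a hypothetical 16-node machine with ~150 caches and ~1 KB per alien
cache, that is already 150 * 16 * 15 * 1 KB, about 35 MB of queues,
before a single object has been handed out. SLUB avoids this because
there are no per-node queues to replicate in the first place.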
I don't object to changing the default to CONFIG_SLAB, but it's not
really a long-term strategy unless we want to have three kmalloc
implementations in the kernel: one for embedded, one for UMA, and one
for NUMA.
Pekka
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/