Message-ID: <20070709215724.GV11115@waste.org>
Date: Mon, 9 Jul 2007 16:57:24 -0500
From: Matt Mackall <mpm@...enic.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Ingo Molnar <mingo@...e.hu>, Christoph Lameter <clameter@....com>,
linux-kernel@...r.kernel.org, linux-mm@...r.kernel.org,
suresh.b.siddha@...el.com, corey.d.gough@...el.com,
Pekka Enberg <penberg@...helsinki.fi>
Subject: Re: [patch 09/10] Remove the SLOB allocator for 2.6.23
On Sun, Jul 08, 2007 at 11:02:24AM -0700, Andrew Morton wrote:
> Guys, look at this the other way. Suppose we only had slub, and someone
> came along and said "here's a whole new allocator which saves 4.5k of
> text", would we merge it on that basis? Hell no, it's not worth it. What
> we might do is to get motivated to see if we can make slub less porky under
> appropriate config settings.
Well, I think we would obviously throw out SLAB and SLUB if they
weren't somewhat faster than SLOB. They're much more complex, and one
of the big features Christoph is pushing is a fix for a problem that
SLOB simply doesn't have: huge numbers of SLAB/SLUB pages being
pinned by small numbers of live objects.
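To make that pinning problem concrete, here's a minimal userspace
sketch (not kernel code; the sizes and names are illustrative, not
SLAB/SLUB internals): a slab-style allocator carves a page into
fixed-size objects and can't hand the page back to the page allocator
while even one object stays live.

/*
 * Illustrative userspace sketch, not kernel code: why a slab-style
 * allocator can keep a whole page pinned for one live object.
 */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE     4096
#define OBJ_SIZE       128
#define OBJS_PER_PAGE (PAGE_SIZE / OBJ_SIZE)

int main(void)
{
	/* One "slab": a page carved into fixed-size objects. */
	char *page = malloc(PAGE_SIZE);
	char *obj[OBJS_PER_PAGE];
	int live = 0, i;

	if (!page)
		return 1;

	for (i = 0; i < OBJS_PER_PAGE; i++)
		obj[i] = page + i * OBJ_SIZE;

	/*
	 * "Free" every object except obj[0] back to the slab.  The
	 * page can only go back to the page allocator once all 32
	 * objects are free, so one 128-byte object keeps all 4096
	 * bytes out of circulation.
	 */
	for (i = 1; i < OBJS_PER_PAGE; i++)
		obj[i] = NULL;
	for (i = 0; i < OBJS_PER_PAGE; i++)
		if (obj[i])
			live++;

	printf("live objects: %d, bytes pinned: %d\n", live, PAGE_SIZE);
	free(page);
	return 0;
}

SLOB sidesteps this by packing variably-sized allocations into shared
pages rather than dedicating whole pages to a single object size.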
> Let's not get sentimental about these things: in general, if there's any
> reasonable way in which we can rid ourselves of any code at all, we should
> do so, no?
I keep suggesting a Voyager Replacement Fund, but James isn't
interested.
But seriously, I don't think it should be at all surprising that the
allocator most appropriate for machines with < 32MB of RAM is
different from the one for machines with > 1TB of RAM.
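And this is already a config-time decision, so both can coexist. For
reference, a fragment along these lines (option names as of the
2.6.2x series, where SLOB sits under the "Choose SLAB allocator"
choice and depends on EMBEDDED) selects SLOB for a small-memory
build:

# Illustrative .config fragment for a small-memory (< 32MB) system
CONFIG_EMBEDDED=y
# CONFIG_SLAB is not set
# CONFIG_SLUB is not set
CONFIG_SLOB=y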
The maintenance overhead of SLOB is fairly minimal. The biggest
outstanding SLOB problem is nommu's rather broken memory size
reporting.
--
Mathematics is the supreme nostalgia of our time.