Message-Id: <1199933196.6245.125.camel@cinder.waste.org>
Date: Wed, 09 Jan 2008 20:46:36 -0600
From: Matt Mackall <mpm@...enic.com>
To: Pekka J Enberg <penberg@...helsinki.fi>
Cc: Christoph Lameter <clameter@....com>, Ingo Molnar <mingo@...e.hu>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Hugh Dickins <hugh@...itas.com>,
Andi Kleen <andi@...stfloor.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] greatly reduce SLOB external fragmentation
On Thu, 2008-01-10 at 00:43 +0200, Pekka J Enberg wrote:
> Hi Matt,
>
> On Wed, 9 Jan 2008, Matt Mackall wrote:
> > I kicked this around for a while, slept on it, and then came up with
> > this little hack first thing this morning:
> >
> > ------------
> > slob: split free list by size
> >
>
> [snip]
>
> > And the results are fairly miraculous, so please double-check them on
> > your setup. The resulting statistics change to this:
>
> [snip]
>
> > So the average jumped by 714k from before the patch, became much more
> > stable, and beat SLUB by 287k. There are also 7 perfectly filled pages
> > now, up from 1 before. And we can't get a whole lot better than this:
> > we're using 259 pages for 244 pages of actual data, so our total
> > overhead is only 6%! For comparison, SLUB's using about 70 pages more
> > for the same data, so its total overhead appears to be about 35%.
>
> Unfortunately I only see a slight improvement to SLOB (but it still gets
> beaten by SLUB):
>
> [ the minimum, maximum, and average are captured from 10 individual runs ]
>
>                                  Free (kB)                   Used (kB)
>                     Total (kB)   min    max    average       min   max   average
> SLUB (no debug)     26536        23868  23892  23877.6       2644  2668  2658.4
> SLOB (patched)      26548        23456  23708  23603.2       2840  3092  2944.8
> SLOB (vanilla)      26548        23472  23640  23579.6       2908  3076  2968.4
> SLAB (no debug)     26544        23316  23364  23343.2       3180  3228  3200.8
> SLUB (with debug)   26484        23120  23136  23127.2       3348  3364  3356.8
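
For anyone following along, the idea being measured here is to stop keeping
one global list of partially-free SLOB pages and instead keep one list per
size band, so that large allocations don't leave awkward holes in pages that
mostly serve small objects. A rough sketch of that shape (illustrative only,
not the posted patch; the list names and break points below are made up for
the example):

#include <linux/list.h>
#include <linux/types.h>

/* Illustrative cut-offs between size bands; the real patch picks its own. */
#define SLOB_BREAK_SMALL	256
#define SLOB_BREAK_LARGE	1024

static LIST_HEAD(free_slob_small);	/* pages serving objects < 256 bytes  */
static LIST_HEAD(free_slob_medium);	/* pages serving objects < 1024 bytes */
static LIST_HEAD(free_slob_large);	/* pages serving everything bigger    */

/* Pick the per-size-band list a request should be carved out of. */
static struct list_head *slob_list_for(size_t size)
{
	if (size < SLOB_BREAK_SMALL)
		return &free_slob_small;
	if (size < SLOB_BREAK_LARGE)
		return &free_slob_medium;
	return &free_slob_large;
}

/*
 * slob_alloc() would then scan only the matching list, e.g.:
 *
 *	struct list_head *slob_list = slob_list_for(size);
 *
 *	list_for_each_entry(sp, slob_list, list) {
 *		... try to fit the request into this page ...
 *	}
 *
 * and pages go back onto (or come off) the list that matches the size
 * class they serve.  The two-list version is the same idea with a single
 * break point; the three-list numbers below just add one more.
 */
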
With your kernel config and my lguest+busybox setup, I get:
SLUB:
MemFree: 24208 kB
MemFree: 24212 kB
MemFree: 24212 kB
MemFree: 24212 kB
MemFree: 24216 kB
MemFree: 24216 kB
MemFree: 24220 kB
MemFree: 24220 kB
MemFree: 24224 kB
MemFree: 24232 kB
avg: 24217.2
SLOB with two lists:
MemFree: 24204 kB
MemFree: 24260 kB
MemFree: 24260 kB
MemFree: 24276 kB
MemFree: 24288 kB
MemFree: 24292 kB
MemFree: 24312 kB
MemFree: 24320 kB
MemFree: 24336 kB
MemFree: 24396 kB
avg: 24294.4
I'm not sure why this result is so different from yours.
I hacked this up to use three lists to experiment, and now we have:
MemFree: 24348 kB
MemFree: 24372 kB
MemFree: 24372 kB
MemFree: 24372 kB
MemFree: 24372 kB
MemFree: 24380 kB
MemFree: 24384 kB
MemFree: 24404 kB
MemFree: 24404 kB
MemFree: 24408 kB
avg: 24344.4
Even the last version is still using about 250 pages of storage for 209
pages of data, so its overhead is still about 20%.
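
Spelling out the overhead arithmetic, using only the page counts above:

  three lists, this run:      (250 - 209) / 209        ~= 19.6%, i.e. roughly 20%
  two lists, earlier run:     (259 - 244) / 244        ~=  6.1%, i.e. roughly 6%
  SLUB, earlier comparison:   ((259 + 70) - 244) / 244 ~= 34.8%, i.e. roughly 35%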
--
Mathematics is the supreme nostalgia of our time.