Message-Id: <1247217548.771.10.camel@penberg-laptop>
Date: Fri, 10 Jul 2009 12:19:08 +0300
From: Pekka Enberg <penberg@...helsinki.fi>
To: David Rientjes <rientjes@...gle.com>
Cc: Ingo Molnar <mingo@...e.hu>, Janboe Ye <yuan-bo.ye@...orola.com>,
linux-kernel@...r.kernel.org, vegard.nossum@...il.com,
graydon@...hat.com, fche@...hat.com, Nick Piggin <npiggin@...e.de>,
cl@...ux-foundation.org
Subject: Re: [RFC][PATCH] Check write to slab memory which freed already
using mudflap
Hi David,
On Fri, 10 Jul 2009, Ingo Molnar wrote:
> > > SLAB is (slowly) going away so you might want to port this to SLUB
> > > as well so we can merge both.
> >
> > and SLQB which will replace both? :-/
On Fri, 2009-07-10 at 02:04 -0700, David Rientjes wrote:
> I'm not sure what the status of slqb is, although I would have expected it
> to have been pushed for inclusion in 2.6.31 as a slab allocator
> alternative. Nick, any forecast for inclusion?
2.6.32 most likely. Nick has fixed a bunch of problems but there's still
one ppc boot time bug that's turning out to be hard to find.
On Fri, 2009-07-10 at 02:04 -0700, David Rientjes wrote:
> SLUB has a pretty noticeable performance degradation on benchmarks such as
> netperf TCP_RR with high numbers of threads (see my post about it:
> http://marc.info/?l=linux-kernel&m=123839191416472). CONFIG_SLAB is the
> optimal configuration for workloads that share similar slab thrashing
> patterns (which my patchset dealt with in an indirect way and yet still
> didn't match slab's performance). I haven't yet seen data that suggests
> anything other than CONFIG_SLAB has parity with such a benchmark.
As I said before, I'm interested in getting those patches merged. I
think Christoph raised some issues that need to be taken care of before
we can do that, no?
Pekka
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/