Message-ID: <alpine.DEB.2.00.0907100223140.14601@chino.kir.corp.google.com>
Date: Fri, 10 Jul 2009 02:31:43 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Pekka Enberg <penberg@...helsinki.fi>
cc: Ingo Molnar <mingo@...e.hu>, Janboe Ye <yuan-bo.ye@...orola.com>,
linux-kernel@...r.kernel.org, vegard.nossum@...il.com,
graydon@...hat.com, fche@...hat.com, Nick Piggin <npiggin@...e.de>,
cl@...ux-foundation.org
Subject: Re: [RFC][PATCH] Check write to slab memory which freed already
using mudflap
On Fri, 10 Jul 2009, Pekka Enberg wrote:
> > I'm not sure what the status of slqb is, although I would have expected it
> > to have been pushed for inclusion in 2.6.31 as a slab allocator
> > alternative. Nick, any forecast for inclusion?
>
> 2.6.32 most likely. Nick has fixed a bunch of problems but there's still
> one ppc boot time bug that's turning out to be hard to find.
>
Ah, ok, there are still outstanding bugs. I was curious why it wasn't
merged as a non-default option, which might have attracted more
attention to it.
> > SLUB has a pretty noticeable performance degradation on benchmarks such as
> > netperf TCP_RR with high numbers of threads (see my post about it:
> > http://marc.info/?l=linux-kernel&m=123839191416472). CONFIG_SLAB is the
optimal configuration for workloads that share similar slab thrashing
> > patterns (which my patchset dealt with in an indirect way and yet still
> > didn't match slab's performance). I haven't yet seen data that suggests
> > anything other than CONFIG_SLAB has parity with such a benchmark.
>
> As I said before, I'm interested in getting those patches merged. I
> think Christoph raised some issues that need to be taken care of before
> we can do that, no?
>
The issue was the addition of an increment to the freeing fastpath and
some arithmetic in the allocation slowpath that would have negatively
affected performance for caches that don't suffer from the issue, even
under affected benchmarks such as netperf TCP_RR.
Even ignoring the impact on workloads that don't exhibit these
patterns, the patchset unfortunately still doesn't reach parity with
slab. It also diverges from the fundamental design of slub, which fills
partial slabs as quickly as possible to minimize scanning contention on
list_lock at high cpu counts and to reduce overall memory consumption
through less internal slab fragmentation.