Message-ID: <Pine.LNX.4.64.0709101201140.24491@schroedinger.engr.sgi.com>
Date: Mon, 10 Sep 2007 12:07:46 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Nick Piggin <nickpiggin@...oo.com.au>
cc: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, mingo@...e.hu,
Mel Gorman <mel@...net.ie>
Subject: Re: tbench regression - Why process scheduler has impact on tbench
and why small per-cpu slab (SLUB) cache creates the scenario?
On Mon, 10 Sep 2007, Nick Piggin wrote:
> OK, so after isolating the scheduler, then SLUB should be as fast as SLAB
> at the same allocation size. That's basically what we need to do before we
> can replace SLAB with it, I think?
The regression is due to the limited number of objects in the per-cpu
"queue" in SLUB for 4k objects. With the 2.6.23 code this is only one or
two objects (an order-1 slab). So we have to call into the page allocator
frequently, and for order-1 pages, which requires taking the zone locks. Urgh.
I think the regression is best addressed by the page allocator pass-through
patch in -mm, which makes the page allocator handle these objects directly.
They are single pages, so the pcp lists are in use, and those provide much
larger queues than SLUB/SLAB.
IMHO objects of >=4k should be handled by the page allocator. From the
numbers I have seen, there is then still a 1% regression left. If
that is still the case after we have fixed the scheduler, then maybe
we need to slim down the page allocator fast path.