Message-ID: <alpine.DEB.2.00.1105161356140.4353@chino.kir.corp.google.com>
Date: Mon, 16 May 2011 14:03:33 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Andrea Arcangeli <aarcange@...hat.com>
cc: Mel Gorman <mgorman@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
James Bottomley <James.Bottomley@...senpartnership.com>,
Colin King <colin.king@...onical.com>,
Raghavendra D Prabhu <raghu.prabhu13@...il.com>,
Jan Kara <jack@...e.cz>, Chris Mason <chris.mason@...cle.com>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-ext4 <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH 3/3] mm: slub: Default slub_max_order to 0
On Thu, 12 May 2011, Andrea Arcangeli wrote:
> On Wed, May 11, 2011 at 01:38:47PM -0700, David Rientjes wrote:
> > kswapd and doing compaction for the higher order allocs before falling
>
> Note that patch 2 disabled compaction by clearing __GFP_WAIT.
>
> What you describe here would be patch 2 without the ~__GFP_WAIT
> addition (so keeping only ~GFP_NOFAIL).
>
That's out of context; my sentence was:
"With the previous changes in this patchset, specifically avoiding waking
kswapd and doing compaction for the higher order allocs before falling
back to the min order..."
meaning this patchset avoids both waking kswapd and invoking compaction.
> Not clearing __GFP_WAIT when compaction is enabled is possible and
> shouldn't result in bad behavior (if compaction is not enabled with
> current SLUB it's hard to imagine how it could perform decently if
> there's fragmentation). You should try to benchmark to see if it's
> worth it on the large NUMA systems with heavy network traffic (for
> normal systems I doubt compaction is worth it but I'm not against
> trying to keep it enabled just in case).
>
Fragmentation isn't the only issue with the netperf TCP_RR benchmark; the
problem is that the slub slowpath is taken >95% of the time on every
allocation and free for the very large number of kmalloc-256 and
kmalloc-2K caches. Those caches are order 1 and 3, respectively, on my
system by default, but the page allocator is seldom invoked for such a
benchmark after the partial lists are populated: the overhead comes from
the per-node locking required in the slowpath to traverse the partial
lists. See the data I presented two years ago:
http://lkml.org/lkml/2009/3/30/15.