Message-ID: <20110512174641.GL11579@random.random>
Date: Thu, 12 May 2011 19:46:41 +0200
From: Andrea Arcangeli <aarcange@...hat.com>
To: Christoph Lameter <cl@...ux.com>
Cc: James Bottomley <James.Bottomley@...senPartnership.com>,
Dave Jones <davej@...hat.com>, Mel Gorman <mgorman@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Colin King <colin.king@...onical.com>,
Raghavendra D Prabhu <raghu.prabhu13@...il.com>,
Jan Kara <jack@...e.cz>, Chris Mason <chris.mason@...cle.com>,
Pekka Enberg <penberg@...nel.org>,
Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-ext4 <linux-ext4@...r.kernel.org>
Subject: Re: [PATCH 3/3] mm: slub: Default slub_max_order to 0

On Thu, May 12, 2011 at 11:48:19AM -0500, Christoph Lameter wrote:
> Try order = 1 which gives you SLAB like interaction with the page
> allocator. Then we at least know that it is the order 2 and 3 allocs that
> are the problem and not something else.

Order 1 should work better, because it's less likely we end up here,
which leaves RECLAIM_MODE_LUMPYRECLAIM on (and then see what happens
at the top of page_check_references()):
	else if (sc->order && priority < DEF_PRIORITY - 2)
		sc->reclaim_mode |= syncmode;
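
What happens at the top of page_check_references() is, more or less
(quoting from memory, so double check against your tree), that the
referenced bits are ignored outright while lumpy reclaim is active:

	static enum page_references page_check_references(struct page *page,
							  struct scan_control *sc)
	{
		int referenced_ptes, referenced_page;
		unsigned long vm_flags;

		referenced_ptes = page_referenced(page, 1, sc->mem_cgroup,
						  &vm_flags);
		referenced_page = TestClearPageReferenced(page);

		/* Lumpy reclaim - ignore references */
		if (sc->reclaim_mode & RECLAIM_MODE_LUMPYRECLAIM)
			return PAGEREF_RECLAIM;
		...

so every referenced page is treated as reclaimable anyway.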
With order 1 it's more likely we end up here instead, as enough pages
get freed for an order-1 allocation to succeed, and we're safe:
	else
		sc->reclaim_mode = RECLAIM_MODE_SINGLE | RECLAIM_MODE_ASYNC;
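
For context, the whole of set_reclaim_mode() looks more or less like
this (again from memory, double check your tree):

	static void set_reclaim_mode(int priority, struct scan_control *sc,
				     bool sync)
	{
		reclaim_mode_t syncmode = sync ? RECLAIM_MODE_SYNC : RECLAIM_MODE_ASYNC;

		/* lumpy reclaim, or reclaim/compaction with COMPACTION=y */
		if (COMPACTION_BUILD)
			sc->reclaim_mode = RECLAIM_MODE_COMPACTION;
		else
			sc->reclaim_mode = RECLAIM_MODE_LUMPYRECLAIM;

		/*
		 * Go synchronous only for costly orders, or for lower
		 * orders once the priority has dropped far enough;
		 * otherwise fall back to plain async order-0 reclaim.
		 */
		if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
			sc->reclaim_mode |= syncmode;
		else if (sc->order && priority < DEF_PRIORITY - 2)
			sc->reclaim_mode |= syncmode;
		else
			sc->reclaim_mode = RECLAIM_MODE_SINGLE | RECLAIM_MODE_ASYNC;
	}

Note the COMPACTION_BUILD branch: with COMPACTION=y we never set
RECLAIM_MODE_LUMPYRECLAIM in the first place.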
None of these issues should materialize with COMPACTION=y. Even
__GFP_WAIT can be left enabled, so compaction can run, without
expecting adverse behavior; but running compaction may still not be
worth it on small systems, where the benefit of order-1/2/3
allocations may not outweigh the cost of compaction itself.
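
To test Christoph's order 1 suggestion no rebuild should be needed,
slub_max_order is a boot parameter:

	slub_max_order=1

and if I remember right the order of an individual cache can also be
tweaked at runtime through /sys/kernel/slab/<cache>/order.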