Message-ID: <20081002143508.GE11089@brain>
Date: Thu, 2 Oct 2008 15:35:08 +0100
From: Andy Whitcroft <apw@...dowen.org>
To: Christoph Lameter <cl@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Peter Zijlstra <peterz@...radead.org>,
Rik van Riel <riel@...hat.com>, Mel Gorman <mel@....ul.ie>,
Nick Piggin <nickpiggin@...oo.com.au>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 4/4] capture pages freed during direct reclaim for
allocation by the reclaimer
On Wed, Oct 01, 2008 at 10:01:46AM -0500, Christoph Lameter wrote:
> Andy Whitcroft wrote:
> > When a process enters direct reclaim it will expend effort identifying
> > and releasing pages in the hope of obtaining a page. However as these
> > pages are released asynchronously there is every possibility that the
> > pages will have been consumed by other allocators before the reclaimer
> > gets a look in. This is particularly problematic where the reclaimer is
> > attempting to allocate a higher order page. It is highly likely that
> > a parallel allocation will consume the lower order constituent pages as we
> > release them, preventing them from coalescing into the higher order page the
> > reclaimer desires.
>
> The reclaim problem is due to the pcp queueing, right? Could we disable pcp
> queueing during reclaim? pcp processing is not necessarily a gain, so
> temporarily disabling it should not be a problem.
>
> At the beginning of reclaim just flush all pcp pages and then do not allow pcp
> refills again until reclaim is finished?
Not entirely; some pages could certainly get trapped there. But it is
parallel allocations we are trying to guard against. Plus we already flush
the pcp lists during reclaim for higher orders.
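
To make the race concrete, here is a toy userspace C model (not kernel code,
and not what the patch does internally; every list, counter and function name
below is invented purely for illustration). Pages freed to a shared free list
can be picked off one at a time by a parallel order-0 allocator, so the two
buddy halves never coexist long enough to merge; pages held on a private
"captured" list stay together for the reclaimer.

/*
 * Toy model of the coalescing race: a reclaimer frees two buddy halves
 * hoping they merge into one order-1 block, but a parallel allocator can
 * grab a half from the shared free list first.  "Capture" keeps the freed
 * halves on a private list so only the reclaimer can merge them.
 */
#include <stdio.h>
#include <stdbool.h>

#define ORDER0_BUDDIES 2          /* two order-0 halves of one order-1 block */

static int shared_free = 0;       /* order-0 pages on the shared free list  */
static int captured_free = 0;     /* order-0 pages held privately           */

/* A parallel order-0 allocation: takes from the shared list if it can. */
static bool parallel_alloc(void)
{
	if (shared_free > 0) {
		shared_free--;
		return true;
	}
	return false;
}

/* Reclaim both halves, optionally capturing them for the reclaimer. */
static bool reclaim_order1(bool capture)
{
	for (int i = 0; i < ORDER0_BUDDIES; i++) {
		if (capture)
			captured_free++;      /* private: cannot be stolen   */
		else
			shared_free++;        /* public: may be stolen below */

		/* Another CPU allocates between our two frees. */
		parallel_alloc();
	}

	/* The order-1 block exists only if both halves are still together. */
	if (capture)
		return captured_free == ORDER0_BUDDIES;
	return shared_free == ORDER0_BUDDIES;
}

int main(void)
{
	printf("without capture: order-1 block formed? %s\n",
	       reclaim_order1(false) ? "yes" : "no");

	shared_free = captured_free = 0;
	printf("with capture:    order-1 block formed? %s\n",
	       reclaim_order1(true) ? "yes" : "no");
	return 0;
}

Built with e.g. "gcc -std=c99 -o race race.c", the shared-list case reports
"no" and the captured case "yes", which is the whole point of letting the
direct reclaimer keep what it just freed.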
-apw