Message-ID: <alpine.DEB.2.00.1107201425200.1472@router.home>
Date: Wed, 20 Jul 2011 14:28:32 -0500 (CDT)
From: Christoph Lameter <cl@...ux.com>
To: Mel Gorman <mgorman@...e.de>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Minchan Kim <minchan.kim@...il.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: page allocator: Initialise ZLC for first zone
eligible for zone_reclaim
On Wed, 20 Jul 2011, Mel Gorman wrote:
> On Wed, Jul 20, 2011 at 01:08:46PM -0500, Christoph Lameter wrote:
> > Hmmm... Looking at get_page_from_freelist and considering speeding that up
> > in general: Could we move the whole watermark logic into the slow path?
> > Only check when we refill the per cpu queues?
>
> Each CPU list can hold 186 pages (on my currently running
> kernel at least) which is 744K. As I'm running with THP enabled,
> the min watermark is 25852K so with 34 of more CPUs, there is a
> risk that a zone would be fully depleted due to lack of watermark
checking. Bit unlikely that 34 or more CPUs would be on one node but the risk
> is there. Without THP, the min watermark would have been something like
> 32K where it would be much easier to accidentally consume all memory.
>
> Yes, moving the watermark checks to the slow path would be faster
> but under some conditions, the system will lock up.
Well the fastpath would simply grab a page if it's on the list. If the list
is empty then we would be checking the watermarks and extract pages from
the buddylists. The pages in the per cpu lists would not be accounted for
in reclaim. Counters would reflect the buddy allocator pages available.
Reclaim flushes the per cpu pages so the buddy allocator pages would be
replenished.