Message-ID: <1291171667.12777.51.camel@sli10-conroe>
Date: Wed, 01 Dec 2010 10:47:47 +0800
From: Shaohua Li <shaohua.li@...el.com>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Cc: Mel Gorman <mel@....ul.ie>, Simon Kirby <sim@...tway.ca>,
Dave Hansen <dave@...ux.vnet.ibm.com>,
linux-mm <linux-mm@...ck.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/3] mm: kswapd: Stop high-order balancing when any
suitable zone is balanced
On Wed, 2010-12-01 at 10:23 +0800, KOSAKI Motohiro wrote:
> > On Wed, 2010-12-01 at 01:15 +0800, Mel Gorman wrote:
> > > When the allocator enters its slow path, kswapd is woken up to balance the
> > > node. It continues working until all zones within the node are balanced. For
> > > order-0 allocations, this makes perfect sense but for higher orders it can
> > > have unintended side-effects. If the zone sizes are imbalanced, kswapd
> > > may reclaim heavily on a smaller zone discarding an excessive number of
> > > pages. The user-visible behaviour is that kswapd is awake and reclaiming
> > > even though plenty of pages are free from a suitable zone.
> > >
> > > This patch alters the "balance" logic to stop kswapd if any suitable zone
> > > becomes balanced to reduce the number of pages it reclaims from other zones.
> > From my understanding, the patch will stop reclaim in a high zone as
> > soon as a low zone can satisfy the high-order allocation, even if the
> > high zone itself cannot. A high-order allocation targeted at the high
> > zone will then fall back to the low zone and can quickly exhaust it,
> > the DMA zone for example. That would break some drivers.
>
> Have you seen patch [3/3]? I think it mitigates the issue you pointed out.
Yes, it improves things a lot, but the problem is still possible on small systems.