Message-ID: <4F0CE313.1090008@redhat.com>
Date: Tue, 10 Jan 2012 20:17:07 -0500
From: Rik van Riel <riel@...hat.com>
To: Minchan Kim <minchan@...nel.org>
CC: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Dave Chinner <david@...morbit.com>,
nowhere <nowhere@...kenden.ath.cx>,
Michal Hocko <mhocko@...e.cz>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: Kswapd in 3.2.0-rc5 is a CPU hog
On 12/26/2011 10:57 PM, Minchan Kim wrote:
> I guess it's caused by small NORMAL zone.
> The scenario I think is as follows,
I guess it is exaggerated by a small NORMAL zone. Even on my
system, where the NORMAL zone is about 3x as large as the DMA32
zone, I can see that the pages in the NORMAL zone get recycled
slightly faster than those in the DMA32 zone...
> 1. dd consumes memory in the NORMAL zone
> 2. dd enters direct reclaim and wakes up kswapd
> 3. kswapd reclaims some memory in the NORMAL zone until it reaches the high watermark
> 4. schedule
> 5. dd consumes memory again in the NORMAL zone
> 6. kswapd fails to reach the high watermark due to 5.
> 7. loop again, goto 3.
>
> The point is the relative speed of reclaim vs. memory consumption.
> So kswapd can never reach a point where enough pages are free in the NORMAL zone.
I wonder if it would make sense for kswapd to count how many
pages it needs to free in each zone (at step 2), and then stop
reclaiming once it has freed that many pages.
That could leave the NORMAL (or HIGHMEM, on 32 bit) zone below
the watermark, but as long as the other zones are still good,
allocations can proceed just fine.
It would have the disadvantage that kswapd may stop reclaiming
while a zone is still below its watermark, but the advantage of
better balancing allocations between zones.
Does this idea make sense?
--
All rights reversed