Message-ID: <20120615025032.GA8250@localhost>
Date: Fri, 15 Jun 2012 10:50:32 +0800
From: Fengguang Wu <fengguang.wu@...el.com>
To: Wanpeng Li <liwp.linux@...il.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Mel Gorman <mgorman@...e.de>, Minchan Kim <minchan@...nel.org>,
Hugh Dickins <hughd@...gle.com>, linux-kernel@...r.kernel.org,
trivial@...nel.org, Gavin Shan <shangw@...ux.vnet.ibm.com>
Subject: Re: [PATCH] mm/vmscan: cleanup on the comments of do_try_to_free_pages

On Fri, Jun 15, 2012 at 09:25:24AM +0800, Wanpeng Li wrote:
> From: Wanpeng Li <liwp@...ux.vnet.ibm.com>
>
> Since the lumpy reclaim algorithm has been removed by Mel Gorman, clean
> up the leftover traces of lumpy reclaim.

I think the "lumpy writeout" here does not refer to "lumpy reclaim" :-)
Lumpy writeout means batching the writeback I/O into large bursts so
that the disk can stay spun down for longer stretches in laptop mode;
it is unrelated to the lumpy reclaim algorithm that Mel removed.

> @@ -2065,8 +2065,9 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> * Try to write back as many pages as we just scanned. This
> * tends to cause slow streaming writers to write data to the
> * disk smoothly, at the dirtying rate, which is nice. But
> - * that's undesirable in laptop mode, where we *want* lumpy
> - * writeout. So in laptop mode, write out the whole world.
> + * that's undesirable in laptop mode, where as much I/O as
> + * possible should be triggered if the disk needs to be spun up.
> + * So in laptop mode, write out the whole world.
> */
> writeback_threshold = sc->nr_to_reclaim + sc->nr_to_reclaim / 2;
> if (total_scanned > writeback_threshold) {
> --
> 1.7.9.5
>
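For reference, this is roughly what the code around that hunk in
do_try_to_free_pages() looks like (a simplified sketch, not a verbatim
copy of the current tree): in laptop mode, 0 is passed to
wakeup_flusher_threads(), which asks the flusher threads to write back
*all* dirty pages -- the "whole world" the comment talks about -- so
that the disk, once spun up, handles as much I/O as possible before it
is allowed to spin down again.

	writeback_threshold = sc->nr_to_reclaim + sc->nr_to_reclaim / 2;
	if (total_scanned > writeback_threshold) {
		/*
		 * nr_pages == 0 means "write back everything": in
		 * laptop mode we want one big burst of I/O while the
		 * disk is spinning anyway.
		 */
		wakeup_flusher_threads(laptop_mode ? 0 : total_scanned,
					WB_REASON_TRY_TO_FREE_PAGES);
		sc->may_writepage = 1;
	}
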
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/