Message-ID: <537E1FFC.40608@redhat.com>
Date: Thu, 22 May 2014 12:04:12 -0400
From: Rik van Riel <riel@...hat.com>
To: Mel Gorman <mgorman@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>
CC: Johannes Weiner <hannes@...xchg.org>,
Hugh Dickins <hughd@...gle.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Dave Chinner <david@...morbit.com>,
Yuanhan Liu <yuanhan.liu@...ux.intel.com>,
Bob Liu <bob.liu@...cle.com>, Jan Kara <jack@...e.cz>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>,
Linux-FSDevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [PATCH 3/3] mm: vmscan: Use proportional scanning during direct
reclaim and full scan at DEF_PRIORITY
On 05/22/2014 05:09 AM, Mel Gorman wrote:
> Commit "mm: vmscan: obey proportional scanning requirements for kswapd"
> ensured that file/anon lists were scanned proportionally for reclaim from
> kswapd but ignored it for direct reclaim. The intent was to minimise direct
> reclaim latency but Yuanhan Liu pointed out that it substitutes one long
> stall for many small stalls and distorts aging for normal workloads like
> streaming readers/writers. Hugh Dickins pointed out that a side-effect of
> the same commit was that when one LRU list dropped to zero, the entirety
> of the other list was shrunk, leading to excessive reclaim in memcgs.
> This patch scans the file/anon lists proportionally for direct reclaim so
> that pages age similarly whether reclaimed by kswapd or direct reclaim but
> takes care to abort reclaim if one LRU drops to zero after reclaiming the
> requested number of pages.
>
> Note that there are fewer allocation stalls even though the amount
> of direct reclaim scanning is roughly the same.
>
> Signed-off-by: Mel Gorman <mgorman@...e.de>
Acked-by: Rik van Riel <riel@...hat.com>
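
For anyone following along at home, here is a toy userspace model of
what the proportional scan plus early abort amounts to. This is a
sketch only: the sizes, the 50% reclaim rate, and the variable names
are illustrative, and it is not the actual shrink_lruvec() code from
mm/vmscan.c:

	/* gcc -o propscan propscan.c && ./propscan */
	#include <stdio.h>

	int main(void)
	{
		/* remaining scan targets, proportional to LRU sizes */
		unsigned long nr_anon = 800, nr_file = 200;
		unsigned long nr_to_reclaim = 300, nr_reclaimed = 0;
		const unsigned long batch = 32;

		while (nr_anon || nr_file) {
			unsigned long chunk;

			/* scan a batch off each list that still has pages */
			chunk = nr_anon < batch ? nr_anon : batch;
			nr_anon -= chunk;
			nr_reclaimed += chunk / 2;	/* pretend 50% hit rate */

			chunk = nr_file < batch ? nr_file : batch;
			nr_file -= chunk;
			nr_reclaimed += chunk / 2;

			/*
			 * Keep scanning both lists proportionally until the
			 * request is satisfied, but once it is, abort if one
			 * LRU has dropped to zero instead of shrinking the
			 * entirety of the other list.
			 */
			if (nr_reclaimed >= nr_to_reclaim &&
			    (!nr_anon || !nr_file))
				break;
		}

		printf("reclaimed %lu, leftover anon=%lu file=%lu\n",
		       nr_reclaimed, nr_anon, nr_file);
		return 0;
	}

With the numbers above it stops once the file list is empty and the
request is met, leaving part of the anon list unscanned rather than
flattening it the way the pre-patch behaviour could.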
--
All rights reversed