Message-ID: <20151124130740.GG29014@esperanza>
Date: Tue, 24 Nov 2015 16:07:40 +0300
From: Vladimir Davydov <vdavydov@...tuozzo.com>
To: Michal Hocko <mhocko@...nel.org>
CC: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Vlastimil Babka <vbabka@...e.cz>, Mel Gorman <mgorman@...e.de>,
<linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH 2/2] mm, vmscan: do not overestimate anonymous
reclaimable pages

On Tue, Nov 24, 2015 at 12:55:00PM +0100, Michal Hocko wrote:
> zone_reclaimable_pages considers all anonymous pages on the LRUs
> reclaimable if there is at least one entry left on the swap storage.
> This can be really misleading when swap is short on space and can skew
> reclaim decisions based on zone_reclaimable_pages. Fix this by clamping
> the number to the minimum of the available swap space and the anon LRU
> pages.

Suppose there's 100M of swap and 1G of anon pages. This patch makes
zone_reclaimable_pages return 100M instead of 1G in that case. If you
then rotate 600M of the oldest anon pages, which is quite possible,
zone_reclaimable will start returning false. That is wrong: there are
still 400M of anon pages that were not even scanned, and besides, the
600M of rotated pages could have become reclaimable after their ref
bits got cleared.
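
For reference, the caller looks roughly like this (quoting mm/vmscan.c
from memory, so the exact form of the check may differ slightly):

static bool zone_reclaimable(struct zone *zone)
{
        /* Keep trying until we have scanned 6x what is deemed reclaimable. */
        return zone_page_state(zone, NR_PAGES_SCANNED) <
                zone_reclaimable_pages(zone) * 6;
}

With the clamp, and assuming file pages are negligible in this example,
NR_PAGES_SCANNED only has to reach 6 * 100M = 600M before
zone_reclaimable flips to false, even though 400M of anon pages have
never been looked at.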

I think it is the name of zone_reclaimable_pages that is misleading.
Judging by how it is used in zone_reclaimable, it should be called
something like "zone_scannable_pages".

Thanks,
Vladimir
>
> Signed-off-by: Michal Hocko <mhocko@...e.com>
> ---
> mm/vmscan.c | 13 +++++++++----
> 1 file changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 946d348f5040..646001a1f279 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -195,15 +195,20 @@ static bool sane_reclaim(struct scan_control *sc)
>  static unsigned long zone_reclaimable_pages(struct zone *zone)
>  {
>  	unsigned long nr;
> +	long nr_swap = get_nr_swap_pages();
>
>  	nr = zone_page_state(zone, NR_ACTIVE_FILE) +
>  	     zone_page_state(zone, NR_INACTIVE_FILE) +
>  	     zone_page_state(zone, NR_ISOLATED_FILE);
>
> -	if (get_nr_swap_pages() > 0)
> -		nr += zone_page_state(zone, NR_ACTIVE_ANON) +
> -			zone_page_state(zone, NR_INACTIVE_ANON) +
> -			zone_page_state(zone, NR_ISOLATED_ANON);
> +	if (nr_swap > 0) {
> +		unsigned long anon;
> +
> +		anon = zone_page_state(zone, NR_ACTIVE_ANON) +
> +			zone_page_state(zone, NR_INACTIVE_ANON) +
> +			zone_page_state(zone, NR_ISOLATED_ANON);
> +		nr += min_t(unsigned long, nr_swap, anon);
> +	}
>
>  	return nr;
>  }