Message-ID: <75031785-fd9b-8ed2-54ae-c12874d3df5f@suse.cz>
Date: Fri, 17 Jun 2016 12:55:11 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Mel Gorman <mgorman@...hsingularity.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Linux-MM <linux-mm@...ck.org>
Cc: Rik van Riel <riel@...riel.com>,
Johannes Weiner <hannes@...xchg.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 22/27] mm: Convert zone_reclaim to node_reclaim
On 06/09/2016 08:04 PM, Mel Gorman wrote:
> As reclaim is now per-node based, convert zone_reclaim to be node_reclaim.
> It is possible that a node will be reclaimed multiple times if it has
> multiple zones but this is unavoidable without caching all nodes traversed
> so far. The documentation and interface to userspace are the same from
> a configuration perspective and behaviour will be similar unless
> the node-local allocation requests were also limited to lower zones.
>
> Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
[...]
> @@ -682,6 +674,14 @@ typedef struct pglist_data {
> */
> unsigned long totalreserve_pages;
>
> +#ifdef CONFIG_NUMA
> + /*
> + * zone reclaim becomes active if more unmapped pages exist.
Should say "node reclaim" now.
> + */
> + unsigned long min_unmapped_pages;
> + unsigned long min_slab_pages;
> +#endif /* CONFIG_NUMA */
> +
> /* Write-intensive fields used from the page allocator */
> ZONE_PADDING(_pad1_)
> spinlock_t lru_lock;
[...]
> @@ -3580,7 +3580,7 @@ static inline unsigned long node_unmapped_file_pages(struct pglist_data *pgdat)
> }
>
> /* Work out how many page cache pages we can reclaim in this reclaim_mode */
> -static unsigned long zone_pagecache_reclaimable(struct zone *zone)
> +static unsigned long zone_pagecache_reclaimable(struct pglist_data *pgdat)
Rename to node_pagecache_reclaimable?