Message-ID: <CAAPL-u-zQFRD=m3+vygesijsQT01H9BYtpDw9Q+80CUB=mdW1g@mail.gmail.com>
Date: Thu, 1 Apr 2021 17:55:13 -0700
From: Wei Xu <weixugc@...gle.com>
To: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kbusch@...nel.org, shy828301@...il.com,
David Rientjes <rientjes@...gle.com>, ying.huang@...el.com,
Dan Williams <dan.j.williams@...el.com>, david@...hat.com,
osalvador@...e.de
Subject: Re: [PATCH 08/10] mm/vmscan: Consider anonymous pages without swap
On Thu, Apr 1, 2021 at 11:35 AM Dave Hansen <dave.hansen@...ux.intel.com> wrote:
>
>
> From: Keith Busch <kbusch@...nel.org>
>
> Reclaim anonymous pages if a migration path is available now that
> demotion provides a non-swap recourse for reclaiming anon pages.
>
> Note that this check is subtly different from the
> anon_should_be_aged() checks. This mechanism checks whether a
> specific page in a specific context *can* actually be reclaimed, given
> current swap space and cgroup limits.
>
> anon_should_be_aged() is a much simpler and more preliminary check
> which just says whether there is a possibility of future reclaim.
>
> #Signed-off-by: Keith Busch <keith.busch@...el.com>
> Cc: Keith Busch <kbusch@...nel.org>
> Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
> Reviewed-by: Yang Shi <shy828301@...il.com>
> Cc: Wei Xu <weixugc@...gle.com>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Huang Ying <ying.huang@...el.com>
> Cc: Dan Williams <dan.j.williams@...el.com>
> Cc: David Hildenbrand <david@...hat.com>
> Cc: osalvador <osalvador@...e.de>
>
> --
>
> Changes from Dave 10/2020:
> * remove 'total_swap_pages' modification
>
> Changes from Dave 06/2020:
> * rename reclaim_anon_pages()->can_reclaim_anon_pages()
>
> Note: Keith's Intel SoB is commented out because he is no
> longer at Intel and his @intel.com mail will bounce.
> ---
>
> b/mm/vmscan.c | 35 ++++++++++++++++++++++++++++++++---
> 1 file changed, 32 insertions(+), 3 deletions(-)
>
> diff -puN mm/vmscan.c~0009-mm-vmscan-Consider-anonymous-pages-without-swap mm/vmscan.c
> --- a/mm/vmscan.c~0009-mm-vmscan-Consider-anonymous-pages-without-swap 2021-03-31 15:17:19.388000242 -0700
> +++ b/mm/vmscan.c 2021-03-31 15:17:19.407000242 -0700
> @@ -287,6 +287,34 @@ static bool writeback_throttling_sane(st
> }
> #endif
>
> +static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
> + int node_id)
> +{
> + if (memcg == NULL) {
> + /*
> + * For non-memcg reclaim, is there
> + * space in any swap device?
> + */
> + if (get_nr_swap_pages() > 0)
> + return true;
> + } else {
> + /* Is the memcg below its swap limit? */
> + if (mem_cgroup_get_nr_swap_pages(memcg) > 0)
> + return true;
> + }
> +
> + /*
> + * The page cannot be swapped.
> + *
> + * Can it be reclaimed from this node via demotion?
> + */
> + if (next_demotion_node(node_id) >= 0)
> + return true;
When there is no swap space and RECLAIM_MIGRATE is not enabled, but
next_demotion_node() still reports a demotion target, inactive anon pages
can be neither swapped out nor demoted. However, this check can still
cause such pages to be sent to shrink_page_list() (e.g., when
can_reclaim_anon_pages() is called from get_scan_count()) and cause THP
pages to be unnecessarily split there.
One fix would be to guard this next_demotion_node() check with a
node_reclaim_mode & RECLAIM_MIGRATE check. The same guard would also be
needed at the other next_demotion_node() call sites in vmscan.c.
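Roughly, as a sketch only (can_demote_anon_pages() is a made-up helper
name; RECLAIM_MIGRATE is the node_reclaim_mode bit added in patch 10/10
of this series):

	static inline bool can_demote_anon_pages(int node_id)
	{
		/* Only consider demotion once reclaim-based migration
		 * has been opted into via node_reclaim_mode. */
		if (!(node_reclaim_mode & RECLAIM_MIGRATE))
			return false;
		return next_demotion_node(node_id) >= 0;
	}

can_reclaim_anon_pages() and the other callers in vmscan.c would then use
this helper instead of calling next_demotion_node() directly.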
> +
> + /* No way to reclaim anon pages */
> + return false;
> +}
> +
> /*
> * This misses isolated pages which are not accounted for to save counters.
> * As the data only determines if reclaim or compaction continues, it is
> @@ -298,7 +326,7 @@ unsigned long zone_reclaimable_pages(str
>
> nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) +
> zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE);
> - if (get_nr_swap_pages() > 0)
> + if (can_reclaim_anon_pages(NULL, zone_to_nid(zone)))
> nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
> zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
>
> @@ -2323,6 +2351,7 @@ enum scan_balance {
> static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
> unsigned long *nr)
> {
> + struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> unsigned long anon_cost, file_cost, total_cost;
> int swappiness = mem_cgroup_swappiness(memcg);
> @@ -2333,7 +2362,7 @@ static void get_scan_count(struct lruvec
> enum lru_list lru;
>
> /* If we have no swap space, do not bother scanning anon pages. */
> - if (!sc->may_swap || mem_cgroup_get_nr_swap_pages(memcg) <= 0) {
> + if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id)) {
Demotion of anon pages still depends on sc->may_swap here. Any thoughts
on decoupling demotion from swapping more completely?
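One purely illustrative direction, reusing the hypothetical
can_demote_anon_pages() helper sketched above, would be to require
sc->may_swap only for the swap path:

	/* Illustrative only: scan anon when any reclaim path exists,
	 * but gate only the swap path on sc->may_swap. */
	if (!can_reclaim_anon_pages(memcg, pgdat->node_id) ||
	    (!sc->may_swap && !can_demote_anon_pages(pgdat->node_id))) {
		scan_balance = SCAN_FILE;
		goto out;
	}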
> scan_balance = SCAN_FILE;
> goto out;
> }
> @@ -2708,7 +2737,7 @@ static inline bool should_continue_recla
> */
> pages_for_compaction = compact_gap(sc->order);
> inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE);
> - if (get_nr_swap_pages() > 0)
> + if (can_reclaim_anon_pages(NULL, pgdat->node_id))
> inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON);
>
> return inactive_lru_pages > pages_for_compaction;
> _
>