Message-ID: <17f04d65-020e-a52d-41e8-0d540e32b475@loongson.cn>
Date:   Tue, 20 Sep 2022 14:12:17 +0800
From:   Hongchen Zhang <zhanghongchen@...ngson.cn>
To:     Johannes Weiner <hannes@...xchg.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     Mel Gorman <mgorman@...hsingularity.net>,
        Hugh Dickins <hughd@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, kernel-team@...com,
        Yosry Ahmed <yosryahmed@...gle.com>
Subject: Re: [PATCH] mm: vmscan: fix extreme overreclaim and swap floods

Hi Johannes,

On 2022/8/3 am 12:28, Johannes Weiner wrote:
> During proactive reclaim, we sometimes observe severe overreclaim,
> with several thousand times more pages reclaimed than requested.
> 
> This trace was obtained from shrink_lruvec() during such an instance:
> 
>      prio:0 anon_cost:1141521 file_cost:7767
>      nr_reclaimed:4387406 nr_to_reclaim:1047 (or_factor:4190)
>      nr=[7161123 345 578 1111]
> 
> While the reclaimer requested 4M, vmscan reclaimed close to 16G, most
> of it by swapping. These requests take over a minute, during which the
> write() to memory.reclaim is unkillably stuck inside the kernel.
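
Just as a sanity check of those trace numbers (my own arithmetic below,
assuming 4 KiB pages; none of it is from the patch):

	#include <stdio.h>

	int main(void)
	{
		unsigned long nr_to_reclaim = 1047;	/* 1047 pages * 4 KiB ~= 4 MiB ("4M" above) */
		unsigned long nr_reclaimed = 4387406;	/* 4387406 pages * 4 KiB ~= 16.7 GiB ("16G" above) */

		printf("or_factor = %lu\n", nr_reclaimed / nr_to_reclaim);	/* prints 4190, as in the trace */
		return 0;
	}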
> 
> Digging into the source, this is caused by the proportional reclaim
> bailout logic. This code tries to resolve a fundamental conflict: to
> reclaim roughly what was requested, while also aging all LRUs fairly
> and in accordance with their size, swappiness, refault rates, etc. The
> way it attempts fairness is that once the reclaim goal has been
> reached, it stops scanning the LRUs with the smaller remaining scan
> targets, and adjusts the remainder of the bigger LRUs according to how
> much of the smaller LRUs was scanned. It then finishes scanning that
> remainder regardless of the reclaim goal.
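
My simplified sketch of that adjustment step, to make sure I read it
right (made-up names, not the kernel code verbatim): the side with the
smaller remaining target is stopped, and the bigger side's remainder is
re-scaled by how much of the smaller side was already scanned.

	/* Sketch only: "small" is the LRU side with the smaller remaining
	 * target (it gets stopped); the return value is the re-scaled
	 * remainder for the bigger side, which is then scanned in full. */
	static unsigned long rescale_big_target(unsigned long small_target,	/* original target */
						unsigned long small_left,	/* still unscanned */
						unsigned long big_target,
						unsigned long big_left)
	{
		/* percentage of the smaller side left unscanned */
		unsigned long percentage = small_left * 100 / (small_target + 1);
		unsigned long big_scanned = big_target - big_left;
		unsigned long new_left = big_target * (100 - percentage) / 100;

		/* minus what the bigger side has already scanned */
		return new_left > big_scanned ? new_left - big_scanned : 0;
	}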
> 
> This works fine if priority levels are low and the LRU lists are
> comparable in size. However, in this instance, the cgroup that is
> targeted by proactive reclaim has almost no files left - they've
> already been squeezed out by proactive reclaim earlier - and the
> remaining anon pages are hot. Anon rotations cause the priority level
> to drop to 0, which results in reclaim targeting all of anon (a lot)
> and all of file (almost nothing). By the time reclaim decides to bail,
> it has scanned most or all of the file target, and therefore must also
> scan most or all of the enormous anon target. This target is thousands
> of times larger than the reclaim goal, thus causing the overreclaim.
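
(For context on "priority 0 targets all of anon": as far as I recall,
get_scan_count() derives the base per-LRU target roughly like the
simplified helper below; the name is mine, not a kernel identifier.)

	static unsigned long base_scan_target(unsigned long lruvec_size, int priority)
	{
		/* at priority 0 the shift is zero, so the whole list becomes the target */
		return lruvec_size >> priority;
	}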
> 
> The bailout code hasn't changed in years, so why is this failing now?
> The most likely explanations are two other recent changes in anon
> reclaim:
> 
> 1. Before the series starting with commit 5df741963d52 ("mm: fix LRU
>     balancing effect of new transparent huge pages"), the VM was
>     overall relatively reluctant to swap at all, even if swap was
>     configured. This means the LRU balancing code didn't come into play
>     as often as it does now, and mostly in high pressure situations
>     where pronounced swap activity wouldn't be as surprising.
> 
> 2. For historic reasons, shrink_lruvec() loops on the scan targets of
>     all LRU lists except the active anon one, meaning it would bail if
>     the only remaining pages to scan were active anon - even if there
>     were a lot of them.
> 
>     Before the series starting with commit ccc5dc67340c ("mm/vmscan:
>     make active/inactive ratio as 1:1 for anon lru"), most anon pages
>     would live on the active LRU; the inactive one would contain only a
>     handful of preselected reclaim candidates. After the series, anon
>     gets aged similarly to file, and the inactive list is the default
>     for new anon pages as well, making it often the much bigger list.
> 
>     As a result, the VM is now more likely to actually finish large
>     anon targets than before.
> 
> Change the code such that only one SWAP_CLUSTER_MAX-sized nudge toward
> the larger LRU lists is made before bailing out on a met reclaim goal.
> 
> This fixes the extreme overreclaim problem.
> 
> Fairness is more subtle and harder to evaluate. No obvious misbehavior
> was observed on the test workload, in any case. Conceptually, fairness
> should primarily be a cumulative effect from regular, lower priority
> scans. Once the VM is in trouble and needs to escalate scan targets to
> make forward progress, fairness needs to take a backseat. This is also
> acknowledged by the myriad exceptions in get_scan_count(). This patch
> makes fairness decrease gradually, as it keeps fairness work static
> over increasing priority levels with growing scan targets. This should
> make more sense - although we may have to re-visit the exact values.
> 
> Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> ---
>   mm/vmscan.c | 10 ++++------
>   1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index f7d9a683e3a7..1cc0c6666787 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2897,8 +2897,8 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>   	enum lru_list lru;
>   	unsigned long nr_reclaimed = 0;
>   	unsigned long nr_to_reclaim = sc->nr_to_reclaim;
> +	bool proportional_reclaim;
>   	struct blk_plug plug;
> -	bool scan_adjusted;
>   
>   	get_scan_count(lruvec, sc, nr);
>   
> @@ -2916,8 +2916,8 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>   	 * abort proportional reclaim if either the file or anon lru has already
>   	 * dropped to zero at the first pass.
>   	 */
> -	scan_adjusted = (!cgroup_reclaim(sc) && !current_is_kswapd() &&
> -			 sc->priority == DEF_PRIORITY);
> +	proportional_reclaim = (!cgroup_reclaim(sc) && !current_is_kswapd() &&
> +				sc->priority == DEF_PRIORITY);
>   
>   	blk_start_plug(&plug);
>   	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
> @@ -2937,7 +2937,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>   
>   		cond_resched();
>   
> -		if (nr_reclaimed < nr_to_reclaim || scan_adjusted)
> +		if (nr_reclaimed < nr_to_reclaim || proportional_reclaim)
>   			continue;
>   
>   		/*
> @@ -2988,8 +2988,6 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>   		nr_scanned = targets[lru] - nr[lru];
>   		nr[lru] = targets[lru] * (100 - percentage) / 100;
>   		nr[lru] -= min(nr[lru], nr_scanned);
We should not just remove the scan_adjusted = true line quoted below,
because kswapd may also call this function and there is no side effect
in doing the scan adjustment for kswapd, so it may be better to change
it like this:
+	if (current_is_kswapd())
+		scan_adjusted = true;
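
(Since the patch renames scan_adjusted to proportional_reclaim, I assume
the same idea against the patched code would read roughly like the hunk
below; just my sketch with the new name, not a tested change:)

+	if (current_is_kswapd())
+		proportional_reclaim = true;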
> -
> -		scan_adjusted = true;
>   	}
>   	blk_finish_plug(&plug);
>   	sc->nr_reclaimed += nr_reclaimed;
> 

Thanks
Hongchen Zhang
