Message-ID: <CAAmzW4OBYPkjPBdhV5H-yhsmzisw8ZdTTsj=QzbW8grY8RqkJQ@mail.gmail.com>
Date:   Fri, 12 Aug 2022 10:59:12 +0900
From:   Joonsoo Kim <js1304@...il.com>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Hugh Dickins <hughd@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Linux Memory Management List <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>, kernel-team@...com
Subject: Re: [PATCH] mm: vmscan: fix extreme overreclaim and swap floods

On Wed, Aug 3, 2022 at 1:28 AM Johannes Weiner <hannes@...xchg.org> wrote:
>
> During proactive reclaim, we sometimes observe severe overreclaim,
> with several thousand times more pages reclaimed than requested.
>
> This trace was obtained from shrink_lruvec() during such an instance:
>
>     prio:0 anon_cost:1141521 file_cost:7767
>     nr_reclaimed:4387406 nr_to_reclaim:1047 (or_factor:4190)
>     nr=[7161123 345 578 1111]
>
> While the reclaimer requested 4M, vmscan reclaimed close to 16G, most
> of it by swapping. These requests take over a minute, during which the
> write() to memory.reclaim is unkillably stuck inside the kernel.
>
> Digging into the source, this is caused by the proportional reclaim
> bailout logic. This code tries to resolve a fundamental conflict: to
> reclaim roughly what was requested, while also aging all LRUs fairly
> and in accordance with their size, swappiness, refault rates etc. The
> way it attempts fairness is that once the reclaim goal has been
> reached, it stops scanning the LRUs with the smaller remaining scan
> targets, and adjusts the remainder of the bigger LRUs according to how
> much of the smaller LRUs was scanned. It then finishes scanning that
> remainder regardless of the reclaim goal.
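
For my own understanding, here is a simplified, standalone sketch of
that bailout step as I read the description (made-up names and only a
two-entry LRU array; this is not the actual mm/vmscan.c code):

enum lru { LRU_ANON, LRU_FILE, NR_LRU };

/*
 * Stop the LRU with the smaller remaining target and rescale the
 * bigger one so it ends up scanned to the same fraction of its
 * target, no matter how far past the reclaim goal that goes.
 */
static void bailout_adjust(unsigned long target[NR_LRU],
                           unsigned long remaining[NR_LRU])
{
        enum lru small = remaining[LRU_FILE] <= remaining[LRU_ANON] ?
                                                LRU_FILE : LRU_ANON;
        enum lru big = (small == LRU_FILE) ? LRU_ANON : LRU_FILE;

        /* percentage of the smaller LRU's target scanned so far */
        unsigned long pct = (target[small] - remaining[small]) * 100 /
                                                (target[small] + 1);
        unsigned long big_scanned = target[big] - remaining[big];

        remaining[small] = 0;           /* stop the smaller LRU */

        /* the bigger LRU must catch up to the same scanned percentage */
        remaining[big] = target[big] * pct / 100;
        remaining[big] -= remaining[big] < big_scanned ?
                                remaining[big] : big_scanned;
}

With a tiny file target and a huge anon target, pct quickly reaches
~100, so the anon remainder stays close to its full (huge) target.
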
>
> This works fine if priority levels are low and the LRU lists are
> comparable in size. However, in this instance, the cgroup that is
> targeted by proactive reclaim has almost no files left - they've
> already been squeezed out by proactive reclaim earlier - and the
> remaining anon pages are hot. Anon rotations cause the priority level
> to drop to 0, which results in reclaim targeting all of anon (a lot)
> and all of file (almost nothing). By the time reclaim decides to bail,
> it has scanned most or all of the file target, and therefore must also
> scan most or all of the enormous anon target. This target is thousands
> of times larger than the reclaim goal, thus causing the overreclaim.
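
Just to put numbers on this: if nr=[] in the trace is in the usual LRU
order (inactive/active anon, then inactive/active file), the anon target
is about 7161123 + 345 ~= 7.16M pages (~27GB), while the file target is
only 578 + 1111 = 1689 pages, against a goal of 1047 pages. Once those
~1.7k file pages are scanned, the "same scanned fraction" rule leaves
essentially the whole 7.16M-page anon target still to finish, and the
4387406 pages actually reclaimed (~16GB) match the nr_reclaimed above.
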
>
> The bailout code hasn't changed in years, why is this failing now?
> The most likely explanations are two other recent changes in anon
> reclaim:
>
> 1. Before the series starting with commit 5df741963d52 ("mm: fix LRU
>    balancing effect of new transparent huge pages"), the VM was
>    overall relatively reluctant to swap at all, even if swap was
>    configured. This means the LRU balancing code didn't come into play
>    as often as it does now, and mostly in high pressure situations
>    where pronounced swap activity wouldn't be as surprising.
>
> 2. For historic reasons, shrink_lruvec() loops on the scan targets of
>    all LRU lists except the active anon one, meaning it would bail if
>    the only remaining pages to scan were active anon - even if there
>    were a lot of them.
>
>    Before the series starting with commit ccc5dc67340c ("mm/vmscan:
>    make active/inactive ratio as 1:1 for anon lru"), most anon pages
>    would live on the active LRU; the inactive one would contain only a
>    handful of preselected reclaim candidates. After the series, anon
>    gets aged similarly to file, and the inactive list is the default
>    for new anon pages as well, making it often the much bigger list.
>
>    As a result, the VM is now more likely to actually finish large
>    anon targets than before.
>
> Change the code such that only one SWAP_CLUSTER_MAX-sized nudge toward
> the larger LRU lists is made before bailing out on a met reclaim goal.
>
> This fixes the extreme overreclaim problem.

I think that we can fix the issue without breaking the fairness.
The key idea is to drive the scan from the LRU with the largest scan
target (call it the max-lru). As the max-lru is scanned, scan a
proportional number of pages on each of the other LRUs.

Here is the pseudocode:

1. find the lru having max scan count
2. calculate nr_to_scan_max for max-lru
3. prop = (scanned[max-lru] + nr_to_scan_max) / targets[max-lru]
4. for_each_lru()
4-1. nr_to_scan = (targets[lru] * prop) - scanned[lru]
4-2. shrink_list(nr_to_scan)

With this approach, we can minimize overreclaim without breaking the
fairness.

Note that the actual code needs to handle some corner cases; one of them
is the low-nr_to_scan case, which matters for performance.
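
To make the idea concrete, here is a rough userspace sketch (toy code;
proportional_shrink(), min_ul() and the trivial shrink_list() stand-in
are made up for this example, and the corner cases mentioned above,
such as very small per-lru scan counts, are ignored):

#include <stdio.h>

#define NR_LRU            4
#define SWAP_CLUSTER_MAX  32UL

static unsigned long min_ul(unsigned long a, unsigned long b)
{
        return a < b ? a : b;
}

/* stand-in for shrink_list(): pretend everything scanned gets reclaimed */
static unsigned long shrink_list(int lru, unsigned long nr_to_scan)
{
        (void)lru;
        return nr_to_scan;
}

static unsigned long proportional_shrink(unsigned long targets[NR_LRU],
                                         unsigned long nr_to_reclaim)
{
        unsigned long scanned[NR_LRU] = { 0 };
        unsigned long nr_reclaimed = 0;
        int max_lru = 0, lru;

        /* 1. find the lru having the max scan target */
        for (lru = 1; lru < NR_LRU; lru++)
                if (targets[lru] > targets[max_lru])
                        max_lru = lru;

        while (nr_reclaimed < nr_to_reclaim &&
               scanned[max_lru] < targets[max_lru]) {
                /* 2. next batch on the max-lru */
                unsigned long batch = min_ul(SWAP_CLUSTER_MAX,
                                targets[max_lru] - scanned[max_lru]);
                /* 3. fraction of the max-lru scanned after this batch,
                 *    kept as done / targets[max_lru] to avoid floats */
                unsigned long long done = scanned[max_lru] + batch;

                /* 4. bring every lru up to the same fraction of its target */
                for (lru = 0; lru < NR_LRU; lru++) {
                        unsigned long want = done * targets[lru] /
                                                        targets[max_lru];
                        unsigned long nr_to_scan = want > scanned[lru] ?
                                                   want - scanned[lru] : 0;

                        if (!nr_to_scan)
                                continue;
                        nr_reclaimed += shrink_list(lru, nr_to_scan);
                        scanned[lru] += nr_to_scan;
                }
        }
        return nr_reclaimed;
}

int main(void)
{
        /* toy targets in the spirit of the trace above; goal is 1047 pages */
        unsigned long targets[NR_LRU] = { 7161123, 345, 578, 1111 };

        printf("reclaimed %lu pages (goal 1047)\n",
               proportional_shrink(targets, 1047));
        return 0;
}

With the targets from the trace above this stops right after the
1047-page goal is met instead of finishing a ~7M-page anon target, while
every LRU stays at roughly the same scanned fraction of its target.
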

Thanks.
