Message-ID: <CACSyD1M9v6S6UVPLdPuoBKBMAphWrR-xsegRc6=_TKxMqu1MJg@mail.gmail.com>
Date: Fri, 14 Mar 2025 18:35:07 +0800
From: Zhongkun He <hezhongkun.hzk@...edance.com>
To: Michal Hocko <mhocko@...e.com>
Cc: akpm@...ux-foundation.org, hannes@...xchg.org, muchun.song@...ux.dev, 
	yosry.ahmed@...ux.dev, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [External] Re: [PATCH V2] mm: vmscan: skip the file folios in
 proactive reclaim if swappiness is MAX

On Fri, Mar 14, 2025 at 5:28 PM Michal Hocko <mhocko@...e.com> wrote:
>
> On Fri 14-03-25 09:52:45, Michal Hocko wrote:
> > On Fri 14-03-25 11:33:50, Zhongkun He wrote:
> > > With commit 68cd9050d871 ("mm: add swappiness= arg to
> > > memory.reclaim"), we can pass an additional swappiness=<val> argument
> > > to memory.reclaim. It is very useful because we can dynamically adjust
> > > the reclamation ratio based on the anonymous folios and file folios of
> > > each cgroup. For example, when swappiness is set to 0, we only reclaim
> > > from file folios.
> > >
> > > However, we have also encountered a new issue: when swappiness is set
> > > to MAX_SWAPPINESS, reclaim may still only target file folios. This is
> > > due to the cache_trim_mode knob, which depends solely on the ratio of
> > > inactive folios, regardless of whether there are a large number of cold
> > > folios on the anonymous folio list.
> > >
> > > So, we hope to add a new control logic where proactive memory reclaim only
> > > reclaims from anonymous folios when swappiness is set to MAX_SWAPPINESS.
> > > For example, something like this:
> > >
> > > echo "2M swappiness=200" > /sys/fs/cgroup/memory.reclaim
> > >
> > > will perform reclaim on the rootcg with a swappiness setting of 200 (max
> > > swappiness) regardless of the file folios. Users have a more comprehensive
> > > view of the application's memory distribution because there are many
> > > metrics available. For example, if we find that a certain cgroup has a
> > > large number of inactive anon folios, we can reclaim only those and skip
> > > file folios, because with zram/zswap the IO tradeoff that
> > > cache_trim_mode is making doesn't hold - file refaults will cause IO,
> > > whereas anon decompression will not.
> > >
> > > With this patch, the swappiness argument of memory.reclaim has a more
> > > precise semantics: 0 means reclaiming only from file pages, while 200
> > > means reclaiming just from anonymous pages.
> >
> > Haven't you said you will try a slightly different approach and always
> > bypass LRU balancing heuristics for pro-active reclaim and swappiness
> > provided? What has happened with that?
>
> I have just noticed that you have followed up [1] with a concern that
> using swappiness in the whole min-max range without any heuristics turns
> out to be harder than just relying on the min and max as extremes.
> What seems to be still missing (or maybe it is just me not seeing that)
> is why should we only enforce those extreme ends of the range and still
> preserve under-defined semantic for all other swappiness values in the
> pro-active reclaim.
>

Yes, you are right.
Here is a demo that bypasses the LRU balancing heuristics in proactive
reclaim when swappiness is provided. I have one question, though, and I'm
not sure whether it needs to be considered: if anon scan=5 and
swappiness=5, then 5*5/200=0, so the anon scan target truncates to zero.
Do you have any suggestions?

diff --git a/mm/vmscan.c b/mm/vmscan.c
index f4312b41e0e0..75935fe42245 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2448,6 +2448,19 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
                goto out;
        }

+       /*
+        * Bypassing LRU balancing heuristics for proactive memory
+        * reclaim to make the semantic of swappiness clearer in
+        * memory.reclaim.
+        */
+       if (sc->proactive && sc->proactive_swappiness) {
+               scan_balance = SCAN_FRACT;
+               fraction[0] = swappiness;
+               fraction[1] = MAX_SWAPPINESS - swappiness;
+               denominator = MAX_SWAPPINESS;
+               goto out;
+       }
+
        /*
         * Do not apply any pressure balancing cleverness when the
         * system is close to OOM, scan both anon and file equally


Additionally, any feedback from others is welcome.

Thanks.

> [1] https://lore.kernel.org/all/CACSyD1OHD8oXQcQmi1D9t2f5oeMVDvCQnYZUMQTGbqBz4YYKLQ@mail.gmail.com/T/#u
> --
> Michal Hocko
> SUSE Labs
