Message-ID: <CAKEwX=MwrRc43iM2050v5u-TPUK4Yn+a4G7+h6ieKhpQ7WtQ=A@mail.gmail.com>
Date: Tue, 28 May 2024 08:09:57 -0700
From: Nhat Pham <nphamcs@...il.com>
To: Takero Funaki <flintglass@...il.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Yosry Ahmed <yosryahmed@...gle.com>, 
	Chengming Zhou <chengming.zhou@...ux.dev>, Jonathan Corbet <corbet@....net>, 
	Andrew Morton <akpm@...ux-foundation.org>, 
	Domenico Cerasuolo <cerasuolodomenico@...il.com>, linux-mm@...ck.org, linux-doc@...r.kernel.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/3] mm: zswap: fix global shrinker memcg iteration

On Mon, May 27, 2024 at 9:34 PM Takero Funaki <flintglass@...il.com> wrote:
>
> This patch fixes an issue where the zswap global shrinker stopped
> iterating through the memcg tree.

Did you observe this problem in practice?

>
> The problem was that `shrink_worker()` would stop iterating when a memcg
> was being offlined and restart from the tree root.  Now, it properly
> handles the offlining memcg and continues shrinking with the next memcg.
>
> Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
> Signed-off-by: Takero Funaki <flintglass@...il.com>
> ---
>  mm/zswap.c | 76 ++++++++++++++++++++++++++++++++++++++++--------------
>  1 file changed, 56 insertions(+), 20 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index a50e2986cd2f..0b1052cee36c 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -775,12 +775,27 @@ void zswap_folio_swapin(struct folio *folio)
>         }
>  }
>
> +/*
> + * This function should be called when a memcg is being offlined.
> + *
> + * Since the global shrinker shrink_worker() may hold a reference
> + * of the memcg, we must check and release the reference in
> + * zswap_next_shrink.
> + *
> + * shrink_worker() must handle the case where this function releases
> + * the reference of memcg being shrunk.
> + */
>  void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
>  {
>         /* lock out zswap shrinker walking memcg tree */
>         spin_lock(&zswap_shrink_lock);
> -       if (zswap_next_shrink == memcg)
> -               zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> +
> +       if (READ_ONCE(zswap_next_shrink) == memcg) {
> +               /* put back reference and advance the cursor */
> +               memcg = mem_cgroup_iter(NULL, memcg, NULL);
> +               WRITE_ONCE(zswap_next_shrink, memcg);
> +       }

Hmm, could you expand on how your version differs from the existing
code? What's the point of all these extra steps? The whole thing is
done under a big spin lock anyway.
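
With the lock held, the READ_ONCE()/WRITE_ONCE() pair and the local
variable shuffling look equivalent to the plain accesses we already
have, i.e. something like this (untested sketch of the pre-existing
logic, unless there is a lockless reader of zswap_next_shrink that
I'm missing):

	void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
	{
		/* lock out zswap shrinker walking memcg tree */
		spin_lock(&zswap_shrink_lock);
		/* put back the iterator reference and advance the cursor */
		if (zswap_next_shrink == memcg)
			zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
		spin_unlock(&zswap_shrink_lock);
	}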

> +
>         spin_unlock(&zswap_shrink_lock);
>  }
>
> @@ -1312,25 +1327,38 @@ static int shrink_memcg(struct mem_cgroup *memcg)
>
>  static void shrink_worker(struct work_struct *w)
>  {
> -       struct mem_cgroup *memcg;
> +       struct mem_cgroup *memcg = NULL;
> +       struct mem_cgroup *next_memcg;
>         int ret, failures = 0;
>         unsigned long thr;
>
>         /* Reclaim down to the accept threshold */
>         thr = zswap_accept_thr_pages();
>
> -       /* global reclaim will select cgroup in a round-robin fashion. */
> +       /* global reclaim will select cgroup in a round-robin fashion.
> +        *
> +        * We save iteration cursor memcg into zswap_next_shrink,
> +        * which can be modified by the offline memcg cleaner
> +        * zswap_memcg_offline_cleanup().
> +        */

I feel like the only difference between this loop and the old loop is
that, if we fail to get an online reference to memcg, we retry from
the next one (given by mem_cgroup_iter()) rather than from the top
(NULL).

For instance, consider the first two steps:

1. First, we check if memcg is the same as zswap_next_shrink. If not,
we reset it to zswap_next_shrink.
2. Advance memcg, then store the result in zswap_next_shrink.

Under the big zswap_shrink_lock, this is the same as:

1. Advance zswap_next_shrink.
2. Look up zswap_next_shrink, then store it in the local memcg variable.

which is what we were previously doing.

The next step is shared: check for a NULL memcg (which, again, is the
same as zswap_next_shrink now), and attempt to get an online
reference.
The only difference is what happens when we fail to get the online
reference: instead of starting from the top, we advance memcg again.

Can't we just drop the lock, then add a continue statement? The next
iteration will reacquire the lock, advance zswap_next_shrink, look up
the result, and store it in memcg, which is what you're trying to
achieve, no?

>         do {
>                 spin_lock(&zswap_shrink_lock);
> -               zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> -               memcg = zswap_next_shrink;
> +               next_memcg = READ_ONCE(zswap_next_shrink);
> +
> +               if (memcg != next_memcg) {
> +                       /*
> +                        * Ours was released by offlining.
> +                        * Use the saved memcg reference.
> +                        */
> +                       memcg = next_memcg;
> +               } else {
> +iternext:
> +                       /* advance cursor */
> +                       memcg = mem_cgroup_iter(NULL, memcg, NULL);
> +                       WRITE_ONCE(zswap_next_shrink, memcg);
> +               }
>
>                 /*
> -                * We need to retry if we have gone through a full round trip, or if we
> -                * got an offline memcg (or else we risk undoing the effect of the
> -                * zswap memcg offlining cleanup callback). This is not catastrophic
> -                * per se, but it will keep the now offlined memcg hostage for a while.
> -                *

Why are we removing this comment?

>                  * Note that if we got an online memcg, we will keep the extra
>                  * reference in case the original reference obtained by mem_cgroup_iter
>                  * is dropped by the zswap memcg offlining callback, ensuring that the
> @@ -1345,16 +1373,18 @@ static void shrink_worker(struct work_struct *w)
>                 }
>
>                 if (!mem_cgroup_tryget_online(memcg)) {
> -                       /* drop the reference from mem_cgroup_iter() */
> -                       mem_cgroup_iter_break(NULL, memcg);
> -                       zswap_next_shrink = NULL;
> -                       spin_unlock(&zswap_shrink_lock);
> -
> -                       if (++failures == MAX_RECLAIM_RETRIES)
> -                               break;
> -
> -                       goto resched;

I think we should still increment the failure counter here, to guard
against long-running or infinite loops.
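
i.e. on top of your version, something like (sketch):

	if (!mem_cgroup_tryget_online(memcg)) {
		/*
		 * Offline memcg: leave the reference in
		 * zswap_next_shrink for the cleaner, but count this
		 * as a failure so we eventually bail out instead of
		 * spinning on iternext.
		 */
		if (++failures == MAX_RECLAIM_RETRIES) {
			spin_unlock(&zswap_shrink_lock);
			break;
		}
		goto iternext;
	}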

> +                       /*
> +                        * It is an offline memcg which we cannot shrink
> +                        * until its pages are reparented.
> +                        * Put back the memcg reference before cleanup
> +                        * function reads it from zswap_next_shrink.
> +                        */
> +                       goto iternext;
>                 }
> +               /*
> +                * We got an extra memcg reference before unlocking.
> +                * The cleaner cannot free it using zswap_next_shrink.
> +                */
>                 spin_unlock(&zswap_shrink_lock);
>
>                 ret = shrink_memcg(memcg);
> @@ -1368,6 +1398,12 @@ static void shrink_worker(struct work_struct *w)
>  resched:
>                 cond_resched();
>         } while (zswap_total_pages() > thr);
> +
> +       /*
> +        * We can still hold the original memcg reference.
> +        * The reference is stored in zswap_next_shrink, and then reused
> +        * by the next shrink_worker().
> +        */
>  }
>
>  /*********************************
> --
> 2.43.0
>
