Message-ID: <CAJD7tkaScz+SbB90Q1d5mMD70UfM2a-J2zhXDT9sePR7Qap45Q@mail.gmail.com>
Date: Fri, 2 Aug 2024 21:14:17 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Takero Funaki <flintglass@...il.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Nhat Pham <nphamcs@...il.com>, 
	Chengming Zhou <chengming.zhou@...ux.dev>, Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 1/2] mm: zswap: fix global shrinker memcg iteration

On Tue, Jul 30, 2024 at 5:49 PM Takero Funaki <flintglass@...il.com> wrote:
>
> This patch fixes an issue where the zswap global shrinker stopped
> iterating through the memcg tree.
>
> The problem was that shrink_worker() would restart iterating the memcg
> tree from the root, consider an offline memcg a failure, and abort
> shrinking after encountering the same offline memcg 16 times even if
> there is only one offline memcg. After this change, an offline memcg in
> the tree is no longer considered a failure. This allows the shrinker to
> continue shrinking the other online memcgs regardless of whether an
> offline memcg exists, giving higher zswap writeback activity.
>
> To avoid holding a refcount on an offline memcg encountered during the
> memcg tree walk, shrink_worker() must continue iterating and release the
> offline memcg, so that the next memcg stored in the cursor is online.
>
> The offline memcg cleaner has also been changed to avoid the same issue.
> Previously, when the next memcg after the offlined one was also offline,
> the refcount stored in the iteration cursor was held until the next
> shrink_worker() run. The cleaner must release such offline memcgs
> recursively.
>
> Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
> Acked-by: Yosry Ahmed <yosryahmed@...gle.com>
> Reviewed-by: Chengming Zhou <chengming.zhou@...ux.dev>
> Reviewed-by: Nhat Pham <nphamcs@...il.com>
> Signed-off-by: Takero Funaki <flintglass@...il.com>
> ---
>  mm/zswap.c | 68 +++++++++++++++++++++++++++++++++++-------------------
>  1 file changed, 44 insertions(+), 24 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index adeaf9c97fde..3c16a1192252 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -765,12 +765,25 @@ void zswap_folio_swapin(struct folio *folio)
>         }
>  }
>
> +/*
> + * This function should be called when a memcg is being offlined.
> + *
> + * Since the global shrinker shrink_worker() may hold a reference
> + * of the memcg, we must check and release the reference in
> + * zswap_next_shrink.
> + *
> + * shrink_worker() must handle the case where this function releases
> + * the reference of memcg being shrunk.
> + */
>  void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
>  {
>         /* lock out zswap shrinker walking memcg tree */
>         spin_lock(&zswap_shrink_lock);
> -       if (zswap_next_shrink == memcg)
> -               zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> +       if (zswap_next_shrink == memcg) {
> +               do {
> +                       zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> +               } while (zswap_next_shrink && !mem_cgroup_online(zswap_next_shrink));
> +       }
>         spin_unlock(&zswap_shrink_lock);
>  }
>
> @@ -1304,43 +1317,50 @@ static void shrink_worker(struct work_struct *w)
>         /* Reclaim down to the accept threshold */
>         thr = zswap_accept_thr_pages();
>
> -       /* global reclaim will select cgroup in a round-robin fashion. */
> +       /*
> +        * Global reclaim will select cgroup in a round-robin fashion.
> +        *
> +        * We save iteration cursor memcg into zswap_next_shrink,
> +        * which can be modified by the offline memcg cleaner
> +        * zswap_memcg_offline_cleanup().
> +        *
> +        * Since the offline cleaner is called only once, we cannot leave an
> +        * offline memcg reference in zswap_next_shrink.
> +        * We can rely on the cleaner only if we get online memcg under lock.
> +        *
> +        * If we get an offline memcg, we cannot determine if the cleaner has
> +        * already been called or will be called later. We must put back the
> +        * reference before returning from this function. Otherwise, the
> +        * offline memcg left in zswap_next_shrink will hold the reference
> +        * until the next run of shrink_worker().
> +        */
>         do {
>                 spin_lock(&zswap_shrink_lock);
> -               zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> -               memcg = zswap_next_shrink;
>
>                 /*
> -                * We need to retry if we have gone through a full round trip, or if we
> -                * got an offline memcg (or else we risk undoing the effect of the
> -                * zswap memcg offlining cleanup callback). This is not catastrophic
> -                * per se, but it will keep the now offlined memcg hostage for a while.
> -                *
> +                * Start shrinking from the next memcg after zswap_next_shrink.
> +                * When the offline cleaner has already advanced the cursor,
> +                * advancing the cursor here overlooks one memcg, but this
> +                * should be negligibly rare.
> +                */
> +               do {
> +                       memcg = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> +                       zswap_next_shrink = memcg;
> +               } while (memcg && !mem_cgroup_tryget_online(memcg));

I took a look at refactoring the loop into a helper, but it's probably
not going to be any clearer, because this loop has a tryget while the
loop in zswap_memcg_offline_cleanup() only has an online check. Using
a tryget in the offline cleanup version would be wasteful, as we'd put
the ref right away.
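
For reference, a combined helper would end up looking roughly like the
sketch below (just a sketch, not part of the patch; the helper name and
the getref flag are made up), and I don't think it reads any better than
the two open-coded loops:

	/*
	 * Hypothetical helper: advance zswap_next_shrink to the next
	 * online memcg. Caller must hold zswap_shrink_lock. If @getref
	 * is true, an extra reference is taken on the returned memcg
	 * (the shrink_worker() case); otherwise only an online check is
	 * done (the zswap_memcg_offline_cleanup() case).
	 */
	static struct mem_cgroup *zswap_advance_shrink_cursor(bool getref)
	{
		struct mem_cgroup *memcg;

		do {
			memcg = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
			zswap_next_shrink = memcg;
		} while (memcg && (getref ? !mem_cgroup_tryget_online(memcg) :
					    !mem_cgroup_online(memcg)));

		return memcg;
	}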

Instead, I think we should just move the spin_lock/unlock() closer to
the loop to make the critical section more obvious, and unify the
comments above and below into a single block.

Andrew, could you please fold in the following diff (unless Takero objects):

diff --git a/mm/zswap.c b/mm/zswap.c
index babf0abbcc765..df620eacd1d11 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1364,24 +1364,22 @@ static void shrink_worker(struct work_struct *w)
         * until the next run of shrink_worker().
         */
        do {
-               spin_lock(&zswap_shrink_lock);
-
                /*
                 * Start shrinking from the next memcg after zswap_next_shrink.
                 * When the offline cleaner has already advanced the cursor,
                 * advancing the cursor here overlooks one memcg, but this
                 * should be negligibly rare.
+                *
+                * If we get an online memcg, keep the extra reference in case
+                * the original one obtained by mem_cgroup_iter() is dropped by
+                * zswap_memcg_offline_cleanup() while we are shrinking the
+                * memcg.
                 */
+               spin_lock(&zswap_shrink_lock);
                do {
                        memcg = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
                        zswap_next_shrink = memcg;
                } while (memcg && !mem_cgroup_tryget_online(memcg));
-               /*
-                * Note that if we got an online memcg, we will keep the extra
-                * reference in case the original reference obtained by mem_cgroup_iter
-                * is dropped by the zswap memcg offlining callback, ensuring that the
-                * memcg is not killed when we are reclaiming.
-                */
                spin_unlock(&zswap_shrink_lock);

                if (!memcg) {
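
For clarity, with the diff above folded in, the top of the reclaim loop
in shrink_worker() would read roughly as follows (reconstructed from the
two diffs in this thread, so treat it as a sketch of the result rather
than the exact final code):

	do {
		/*
		 * Start shrinking from the next memcg after zswap_next_shrink.
		 * When the offline cleaner has already advanced the cursor,
		 * advancing the cursor here overlooks one memcg, but this
		 * should be negligibly rare.
		 *
		 * If we get an online memcg, keep the extra reference in case
		 * the original one obtained by mem_cgroup_iter() is dropped by
		 * zswap_memcg_offline_cleanup() while we are shrinking the
		 * memcg.
		 */
		spin_lock(&zswap_shrink_lock);
		do {
			memcg = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
			zswap_next_shrink = memcg;
		} while (memcg && !mem_cgroup_tryget_online(memcg));
		spin_unlock(&zswap_shrink_lock);

		if (!memcg) {
			if (++failures == MAX_RECLAIM_RETRIES)
				break;

			goto resched;
		}

		ret = shrink_memcg(memcg);
		/* drop the extra reference */
		...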

> +               /*
>                  * Note that if we got an online memcg, we will keep the extra
>                  * reference in case the original reference obtained by mem_cgroup_iter
>                  * is dropped by the zswap memcg offlining callback, ensuring that the
>                  * memcg is not killed when we are reclaiming.
>                  */
> -               if (!memcg) {
> -                       spin_unlock(&zswap_shrink_lock);
> -                       if (++failures == MAX_RECLAIM_RETRIES)
> -                               break;
> -
> -                       goto resched;
> -               }
> -
> -               if (!mem_cgroup_tryget_online(memcg)) {
> -                       /* drop the reference from mem_cgroup_iter() */
> -                       mem_cgroup_iter_break(NULL, memcg);
> -                       zswap_next_shrink = NULL;
> -                       spin_unlock(&zswap_shrink_lock);
> +               spin_unlock(&zswap_shrink_lock);
>
> +               if (!memcg) {
>                         if (++failures == MAX_RECLAIM_RETRIES)
>                                 break;
>
>                         goto resched;
>                 }
> -               spin_unlock(&zswap_shrink_lock);
>
>                 ret = shrink_memcg(memcg);
>                 /* drop the extra reference */
> --
> 2.43.0
>
