Message-ID: <CAJD7tka=8YgUx2H=KSjN8ot0TDrh+bZCAgR6iRTfyUqNm7zYfg@mail.gmail.com>
Date: Mon, 22 Jul 2024 23:30:12 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Takero Funaki <flintglass@...il.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Nhat Pham <nphamcs@...il.com>,
Chengming Zhou <chengming.zhou@...ux.dev>, Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/2] mm: zswap: fix global shrinker memcg iteration
On Fri, Jul 19, 2024 at 9:41 PM Takero Funaki <flintglass@...il.com> wrote:
>
> This patch fixes an issue where the zswap global shrinker stopped
> iterating through the memcg tree.
>
> The problem was that shrink_worker() would stop iterating when it hit a
> memcg that was being offlined and restart from the tree root. Now it
> skips the offline memcg and continues shrinking with the next online memcg.
>
> To avoid holding a refcount on an offline memcg encountered during the
> memcg tree walk, shrink_worker() must keep iterating until it releases
> the offline memcg, ensuring the memcg stored in the cursor is online.
>
> The offline memcg cleaner has also been changed to avoid the same
> issue: when the memcg following the offlined one is itself offline, the
> refcount stored in the iteration cursor used to be held until the next
> shrink_worker() run. The cleaner now releases offline memcgs recursively.
>
> Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
> Signed-off-by: Takero Funaki <flintglass@...il.com>
> ---
> mm/zswap.c | 77 +++++++++++++++++++++++++++++++++++++++---------------
> 1 file changed, 56 insertions(+), 21 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index a50e2986cd2f..6528668c9af3 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -775,12 +775,33 @@ void zswap_folio_swapin(struct folio *folio)
> }
> }
>
> +/*
> + * This function should be called when a memcg is being offlined.
> + *
> + * Since the global shrinker shrink_worker() may hold a reference
> + * to the memcg, we must check and release the reference in
> + * zswap_next_shrink.
> + *
> + * shrink_worker() must handle the case where this function releases
> + * the reference to the memcg being shrunk.
> + */
> void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
> {
> /* lock out zswap shrinker walking memcg tree */
> spin_lock(&zswap_shrink_lock);
> - if (zswap_next_shrink == memcg)
> - zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> + if (zswap_next_shrink == memcg) {
> + do {
> + zswap_next_shrink = mem_cgroup_iter(NULL,
> + zswap_next_shrink, NULL);
> + } while (zswap_next_shrink &&
> + !mem_cgroup_online(zswap_next_shrink));
> + /*
> + * We verified the next memcg is online. Even if the next
> + * memcg is being offlined here, another cleaner must be
> + * waiting for our lock. We can leave the online memcg
> + * reference.
> + */
I think this comment and the similar one at the end of the loop in
shrink_worker() are largely redundant. The large comment above the loop
in shrink_worker() already explains how that loop and the offline memcg
cleaner interact, and I think the locking follows naturally from there.
You can explicitly mention the locking there as well if you think it
helps, but these two comments are a little repetitive and do not add
much value.

I don't feel strongly about it though; if Nhat feels they add value,
then I am okay with that.

Otherwise, and with Nhat's other comments addressed:

Acked-by: Yosry Ahmed <yosryahmed@...gle.com>
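
For anyone skimming the thread, the hand-off between the shrink worker
and the offline cleaner that the commit message and these comments
describe can be modeled in userspace roughly as below. This is only an
illustrative sketch, not kernel code: the pthread mutex stands in for
zswap_shrink_lock, the toy memcg_node list stands in for the memcg
tree, and the names iter_next(), offline_cleanup() and shrink_pick()
are made up for the example.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct memcg_node {
	int id;
	bool online;
	int refcount;			/* stands in for the css refcount */
	struct memcg_node *next;
};

static pthread_mutex_t cursor_lock = PTHREAD_MUTEX_INITIALIZER;
static struct memcg_node *tree_head;	/* stands in for the memcg tree */
static struct memcg_node *next_shrink;	/* stands in for zswap_next_shrink */

/* stands in for mem_cgroup_iter(): drop the ref on prev, take one on next */
static struct memcg_node *iter_next(struct memcg_node *prev)
{
	struct memcg_node *next = prev ? prev->next : tree_head;

	if (prev)
		prev->refcount--;
	if (next)
		next->refcount++;
	return next;
}

/* offline cleaner: advance the cursor until it holds an online node */
static void offline_cleanup(struct memcg_node *memcg)
{
	pthread_mutex_lock(&cursor_lock);
	if (next_shrink == memcg) {
		do {
			next_shrink = iter_next(next_shrink);
		} while (next_shrink && !next_shrink->online);
	}
	pthread_mutex_unlock(&cursor_lock);
}

/* one cursor advance in the shrink worker: skip offline nodes, then pin */
static struct memcg_node *shrink_pick(void)
{
	struct memcg_node *memcg;

	pthread_mutex_lock(&cursor_lock);
	do {
		next_shrink = iter_next(next_shrink);
		memcg = next_shrink;
	} while (memcg && !memcg->online);
	if (memcg)
		memcg->refcount++;	/* extra reference kept across unlock */
	pthread_mutex_unlock(&cursor_lock);
	return memcg;
}

int main(void)
{
	struct memcg_node c = { .id = 3, .online = true };
	struct memcg_node b = { .id = 2, .online = false, .next = &c };
	struct memcg_node a = { .id = 1, .online = true, .next = &b };

	tree_head = &a;
	next_shrink = iter_next(NULL);	/* cursor starts at a */
	offline_cleanup(&b);		/* no-op: cursor is not b */

	struct memcg_node *picked = shrink_pick();	/* skips offline b */
	printf("picked memcg %d, refcount %d\n",
	       picked ? picked->id : -1, picked ? picked->refcount : 0);
	return 0;
}

Either path that reaches an offline node keeps walking until the cursor
points at an online one, so no reference to an offline memcg is left
parked in the cursor between runs.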
> + }
> spin_unlock(&zswap_shrink_lock);
> }
>
> @@ -1319,18 +1340,38 @@ static void shrink_worker(struct work_struct *w)
> /* Reclaim down to the accept threshold */
> thr = zswap_accept_thr_pages();
>
> - /* global reclaim will select cgroup in a round-robin fashion. */
> + /* global reclaim will select cgroup in a round-robin fashion.
> + *
> + * We save the iteration cursor memcg in zswap_next_shrink,
> + * which can be modified by the offline memcg cleaner
> + * zswap_memcg_offline_cleanup().
> + *
> + * Since the offline cleaner is called only once, we cannot leave an
> + * offline memcg reference in zswap_next_shrink.
> + * We can rely on the cleaner only if we get an online memcg under the lock.
> + *
> + * If we get an offline memcg, we cannot determine if the cleaner has
> + * already been called or will be called later. We must put back the
> + * reference before returning from this function. Otherwise, the
> + * offline memcg left in zswap_next_shrink will hold the reference
> + * until the next run of shrink_worker().
> + */
> do {
> spin_lock(&zswap_shrink_lock);
> - zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> - memcg = zswap_next_shrink;
>
> /*
> - * We need to retry if we have gone through a full round trip, or if we
> - * got an offline memcg (or else we risk undoing the effect of the
> - * zswap memcg offlining cleanup callback). This is not catastrophic
> - * per se, but it will keep the now offlined memcg hostage for a while.
> - *
> + * Start shrinking from the next memcg after zswap_next_shrink.
> + * When the offline cleaner has already advanced the cursor,
> + * advancing the cursor here overlooks one memcg, but this
> + * should be negligibly rare.
> + */
> + do {
> + zswap_next_shrink = mem_cgroup_iter(NULL,
> + zswap_next_shrink, NULL);
> + memcg = zswap_next_shrink;
> + } while (memcg && !mem_cgroup_tryget_online(memcg));
> +
> + /*
> * Note that if we got an online memcg, we will keep the extra
> * reference in case the original reference obtained by mem_cgroup_iter
> * is dropped by the zswap memcg offlining callback, ensuring that the
> @@ -1344,17 +1385,11 @@ static void shrink_worker(struct work_struct *w)
> goto resched;
> }
>
> - if (!mem_cgroup_tryget_online(memcg)) {
> - /* drop the reference from mem_cgroup_iter() */
> - mem_cgroup_iter_break(NULL, memcg);
> - zswap_next_shrink = NULL;
> - spin_unlock(&zswap_shrink_lock);
> -
> - if (++failures == MAX_RECLAIM_RETRIES)
> - break;
> -
> - goto resched;
> - }
> + /*
> + * We verified the memcg is online and got an extra memcg
> + * reference. Our memcg might be offlined concurrently but the
> + * respective offline cleaner must be waiting for our lock.
> + */
> spin_unlock(&zswap_shrink_lock);
>
> ret = shrink_memcg(memcg);
> --
> 2.43.0
>