Message-ID: <20240720044127.508042-3-flintglass@gmail.com>
Date: Sat, 20 Jul 2024 04:41:25 +0000
From: Takero Funaki <flintglass@...il.com>
To: Johannes Weiner <hannes@...xchg.org>,
Yosry Ahmed <yosryahmed@...gle.com>,
Nhat Pham <nphamcs@...il.com>,
Chengming Zhou <chengming.zhou@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Takero Funaki <flintglass@...il.com>,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [PATCH v3 2/2] mm: zswap: fix global shrinker error handling logic
This patch fixes the zswap global shrinker, which did not shrink the
zpool as expected.
The issue it addresses is that `shrink_worker()` did not distinguish
unexpected errors from expected error codes that should simply be
skipped, such as when a memcg has no stored pages. As a result, the
shrinking process was aborted on the expected error codes. The shrinker
should instead ignore these cases and move on to the next memcg.
However, skipping all memcgs presents another problem: the worker could
retry indefinitely even when no memcg can make progress. To address
this, this patch tracks progress while walking the memcg tree and, once
the tree walk is completed, counts a failure only if no progress was
made during that walk.
To handle the empty-memcg case, the helper function `shrink_memcg()` is
modified to check whether the memcg is empty and, if so, return -ENOENT.
Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
Signed-off-by: Takero Funaki <flintglass@...il.com>
---
mm/zswap.c | 23 +++++++++++++++++------
1 file changed, 17 insertions(+), 6 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index 6528668c9af3..053d5be81d9a 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1310,10 +1310,10 @@ static struct shrinker *zswap_alloc_shrinker(void)
static int shrink_memcg(struct mem_cgroup *memcg)
{
- int nid, shrunk = 0;
+ int nid, shrunk = 0, scanned = 0;
if (!mem_cgroup_zswap_writeback_enabled(memcg))
- return -EINVAL;
+ return -ENOENT;
/*
* Skip zombies because their LRUs are reparented and we would be
@@ -1327,14 +1327,19 @@ static int shrink_memcg(struct mem_cgroup *memcg)
shrunk += list_lru_walk_one(&zswap_list_lru, nid, memcg,
&shrink_memcg_cb, NULL, &nr_to_walk);
+ scanned += 1 - nr_to_walk;
}
+
+ if (!scanned)
+ return -ENOENT;
+
return shrunk ? 0 : -EAGAIN;
}
static void shrink_worker(struct work_struct *w)
{
struct mem_cgroup *memcg;
- int ret, failures = 0;
+ int ret, failures = 0, progress = 0;
unsigned long thr;
/* Reclaim down to the accept threshold */
@@ -1379,9 +1384,12 @@ static void shrink_worker(struct work_struct *w)
*/
if (!memcg) {
spin_unlock(&zswap_shrink_lock);
- if (++failures == MAX_RECLAIM_RETRIES)
+
+ /* tree walk completed but no progress */
+ if (!progress && ++failures == MAX_RECLAIM_RETRIES)
break;
+ progress = 0;
goto resched;
}
@@ -1396,10 +1404,13 @@ static void shrink_worker(struct work_struct *w)
/* drop the extra reference */
mem_cgroup_put(memcg);
- if (ret == -EINVAL)
- break;
+ if (ret == -ENOENT)
+ continue;
+
if (ret && ++failures == MAX_RECLAIM_RETRIES)
break;
+
+ ++progress;
resched:
cond_resched();
} while (zswap_total_pages() > thr);
--
2.43.0