Message-ID: <Zc0Silj_TKkUBRBF@google.com>
Date: Wed, 14 Feb 2024 19:20:42 +0000
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Chengming Zhou <zhouchengming@...edance.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Andrew Morton <akpm@...ux-foundation.org>,
Nhat Pham <nphamcs@...il.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] mm/zswap: global lru and shrinker shared by all zswap_pools
On Wed, Feb 14, 2024 at 08:54:37AM +0000, Chengming Zhou wrote:
> Dynamic zswap_pool creation may create/reuse multiple zswap_pools
> in a list, but only the first one is the pool currently in use.
>
> Each zswap_pool has its own lru and shrinker, which is unnecessary
> and causes problems:
>
> 1. Under memory pressure, every zswap_pool's shrinker tries to
> shrink its own lru, with no ordering between them.
>
> 2. When the zswap limit is hit, only the last zswap_pool's
> shrink_work tries to shrink its lru, which is inefficient.
I think the rationale here was to try and empty the old pool first so
that we can completely drop it. However, since we only support exclusive
loads now, the LRU ordering should be entirely decided by the order of
stores, so I think the oldest entries on the LRU will naturally be
from the oldest pool, right?
Probably worth stating this.
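To make the ordering argument concrete, here is a toy userspace model
(illustrative names only, not the actual mm/zswap.c code): a store is
the only operation that links an entry into the global LRU, and an
exclusive load always unlinks it, so eviction from the head of the
list always hits the oldest stores first, whichever pool compressed
them:

#include <stdio.h>

/* Toy model of one global zswap LRU under exclusive loads. */
struct toy_entry {
	int pool_id;			/* which zswap_pool compressed it */
	struct toy_entry *prev, *next;
};

/* circular list head; lru.next is the oldest (eviction) end */
static struct toy_entry lru = { .prev = &lru, .next = &lru };

/* store: the only operation that links an entry in (at the tail) */
static void toy_store(struct toy_entry *e)
{
	e->prev = lru.prev;
	e->next = &lru;
	lru.prev->next = e;
	lru.prev = e;
}

/* exclusive load: always unlinks; entries are never re-ordered */
static void toy_exclusive_load(struct toy_entry *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

/* shrink: evict from the head, i.e. the oldest store */
static struct toy_entry *toy_evict(void)
{
	struct toy_entry *e = lru.next;

	if (e == &lru)
		return NULL;
	toy_exclusive_load(e);
	return e;
}

int main(void)
{
	struct toy_entry a = { .pool_id = 0 };	/* stored via the old pool */
	struct toy_entry b = { .pool_id = 1 };	/* stored via the new pool */

	toy_store(&a);
	toy_store(&b);

	/* prints pool 0: the old pool's entries drain out first */
	printf("evicted entry from pool %d\n", toy_evict()->pool_id);
	return 0;
}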
>
> Anyway, having a global lru and shrinker shared by all zswap_pools
> is better and more efficient.
>
> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
LGTM with a few comments, with those:
Acked-by: Yosry Ahmed <yosryahmed@...gle.com>
> ---
> mm/zswap.c | 170 +++++++++++++++++++++++--------------------------------------
> 1 file changed, 65 insertions(+), 105 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 62fe307521c9..dbff67d7e1c7 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -176,14 +176,18 @@ struct zswap_pool {
> struct kref kref;
> struct list_head list;
> struct work_struct release_work;
> - struct work_struct shrink_work;
> struct hlist_node node;
> char tfm_name[CRYPTO_MAX_ALG_NAME];
> +};
> +
> +static struct {
> struct list_lru list_lru;
> - struct mem_cgroup *next_shrink;
> - struct shrinker *shrinker;
> atomic_t nr_stored;
> -};
> + struct shrinker *shrinker;
> + struct work_struct shrink_work;
> + struct mem_cgroup *next_shrink;
> + spinlock_t shrink_lock;
The lock is exclusively protecting next_shrink, right? Perhaps we should
rename it to next_shrink_lock or at least document this.
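Something like the below is what I have in mind, i.e. a comment on the
field plus a tight critical section around the cursor update
(zswap_next_shrink_memcg() is a made-up name, and the offline-memcg
handling in shrink_worker() is omitted for brevity):

	/* protects next_shrink only */
	spinlock_t next_shrink_lock;

/* hypothetical helper, just to show the lock's intended scope */
static struct mem_cgroup *zswap_next_shrink_memcg(void)
{
	struct mem_cgroup *memcg;

	spin_lock(&zswap.next_shrink_lock);
	zswap.next_shrink = mem_cgroup_iter(NULL, zswap.next_shrink, NULL);
	memcg = zswap.next_shrink;
	spin_unlock(&zswap.next_shrink_lock);

	return memcg;
}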
> +} zswap;
>
> /*
> * struct zswap_entry
> @@ -301,9 +305,6 @@ static void zswap_update_total_size(void)
> * pool functions
> **********************************/
>
> -static void zswap_alloc_shrinker(struct zswap_pool *pool);
> -static void shrink_worker(struct work_struct *w);
> -
> static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> {
> int i;
> @@ -353,30 +354,16 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
> if (ret)
> goto error;
>
> - zswap_alloc_shrinker(pool);
> - if (!pool->shrinker)
> - goto error;
> -
> - pr_debug("using %s compressor\n", pool->tfm_name);
nit: the next patch introduces a new failure case between this debug
print and zswap_pool_debug() below, so it will again be possible to
get one message without the other. Not a big deal though.
> -
> /* being the current pool takes 1 ref; this func expects the
> * caller to always add the new pool as the current pool
> */
> kref_init(&pool->kref);
> INIT_LIST_HEAD(&pool->list);
> - if (list_lru_init_memcg(&pool->list_lru, pool->shrinker))
> - goto lru_fail;
> - shrinker_register(pool->shrinker);
> - INIT_WORK(&pool->shrink_work, shrink_worker);
> - atomic_set(&pool->nr_stored, 0);
>
> zswap_pool_debug("created", pool);
>
> return pool;
>
> -lru_fail:
> - list_lru_destroy(&pool->list_lru);
> - shrinker_free(pool->shrinker);
> error:
> if (pool->acomp_ctx)
> free_percpu(pool->acomp_ctx);