Message-Id: <20240627133330.7f8a82078725228585dbf2d3@linux-foundation.org>
Date: Thu, 27 Jun 2024 13:33:30 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Chengming Zhou <chengming.zhou@...ux.dev>
Cc: minchan@...nel.org, senozhatsky@...omium.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm/zsmalloc: fix class per-fullness zspage counts
On Thu, 27 Jun 2024 15:59:58 +0800 Chengming Zhou <chengming.zhou@...ux.dev> wrote:
> We always use insert_zspage() and remove_zspage() to update a zspage's
> fullness location, so the per-fullness counts stay correct.
>
> But this special async free path uses "splice" instead of remove_zspage(),
> so the per-fullness zspage count for ZS_INUSE_RATIO_0 is never decremented.
>
> Fix it by decrementing the count while iterating over the zspage free list.
>
> ...
>
> Signed-off-by: Chengming Zhou <chengming.zhou@...ux.dev>
> +++ b/mm/zsmalloc.c
> @@ -1883,6 +1883,7 @@ static void async_free_zspage(struct work_struct *work)
>
> class = zspage_class(pool, zspage);
> spin_lock(&class->lock);
> + class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
> __free_zspage(pool, class, zspage);
> spin_unlock(&class->lock);
> }
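[For readers following along: below is a minimal, standalone C sketch of the
accounting imbalance the changelog describes. It is not the kernel code; the
structures and helpers (fullness_list, stat_count, insert_zspage, async_free)
are simplified stand-ins for zsmalloc's size_class, insert_zspage() and the
list splice in async_free_zspage().]

/*
 * Sketch of the bug: moving zspages off a fullness list with a raw
 * splice skips the per-fullness counter decrement that remove_zspage()
 * would have performed. All names here are simplified stand-ins.
 */
#include <stdio.h>

#define ZS_INUSE_RATIO_0 0

struct zspage { struct zspage *next; };

struct size_class {
	struct zspage *fullness_list[1];  /* only the ratio-0 list modeled */
	int stat_count[1];                /* per-fullness zspage count */
};

/* insert_zspage(): link in and account, as the changelog describes */
static void insert_zspage(struct size_class *class, struct zspage *zspage)
{
	zspage->next = class->fullness_list[ZS_INUSE_RATIO_0];
	class->fullness_list[ZS_INUSE_RATIO_0] = zspage;
	class->stat_count[ZS_INUSE_RATIO_0]++;
}

/*
 * The async free path: splice the whole list off in one step (no
 * per-entry remove_zspage()), then free each zspage.
 */
static void async_free(struct size_class *class, int fixed)
{
	struct zspage *cur = class->fullness_list[ZS_INUSE_RATIO_0];

	class->fullness_list[ZS_INUSE_RATIO_0] = NULL;  /* the "splice" */
	for (; cur; cur = cur->next) {
		if (fixed)  /* the patch: decrement while iterating */
			class->stat_count[ZS_INUSE_RATIO_0]--;
		/* __free_zspage(...) would run here */
	}
}

int main(void)
{
	struct zspage pages[3] = { { 0 } };
	struct size_class class = { { 0 } };
	int i;

	for (i = 0; i < 3; i++)
		insert_zspage(&class, &pages[i]);
	async_free(&class, /*fixed=*/0);
	printf("without fix: count = %d (list is empty)\n",
	       class.stat_count[ZS_INUSE_RATIO_0]);  /* stays 3: leaked */

	for (i = 0; i < 3; i++)
		insert_zspage(&class, &pages[i]);
	class.stat_count[ZS_INUSE_RATIO_0] = 3;       /* reset for demo */
	async_free(&class, /*fixed=*/1);
	printf("with fix:    count = %d\n",
	       class.stat_count[ZS_INUSE_RATIO_0]);  /* back to 0 */
	return 0;
}

[Run as-is, the first call leaves the counter stuck at 3 even though the list
is empty; the fixed path brings it back to 0, which is what the one-line
class_stat_dec() in the patch does for ZS_INUSE_RATIO_0.]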
What are the runtime effects of this bug? Should we backport the fix
into earlier kernels? And are we able to identify the appropriate
Fixes: target?
Thanks.