Date: Thu, 27 Jun 2024 13:33:30 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Chengming Zhou <chengming.zhou@...ux.dev>
Cc: minchan@...nel.org, senozhatsky@...omium.org, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm/zsmalloc: fix class per-fullness zspage counts

On Thu, 27 Jun 2024 15:59:58 +0800 Chengming Zhou <chengming.zhou@...ux.dev> wrote:

> We always use insert_zspage() and remove_zspage() to update a zspage's
> fullness location, which keeps the per-fullness counts correct.
> 
> But this special async free path uses list splicing instead of
> remove_zspage(), so the per-fullness zspage count for ZS_INUSE_RATIO_0
> won't decrease.
> 
> Fix it by decrementing the count while iterating over the zspage free list.
>
> ...
>
> Signed-off-by: Chengming Zhou <chengming.zhou@...ux.dev>
> +++ b/mm/zsmalloc.c
> @@ -1883,6 +1883,7 @@ static void async_free_zspage(struct work_struct *work)
>  
>  		class = zspage_class(pool, zspage);
>  		spin_lock(&class->lock);
> +		class_stat_dec(class, ZS_INUSE_RATIO_0, 1);
>  		__free_zspage(pool, class, zspage);
>  		spin_unlock(&class->lock);
>  	}

What are the runtime effects of this bug?  Should we backport the fix
into earlier kernels?  And are we able to identify the appropriate
Fixes: target?

Thanks.
