Message-ID: <gl54caplknnljganmswspw3cggoyjxe2n7szvnwhhiyl5y7ynh@tzl2yz7bw725>
Date: Wed, 7 Jan 2026 19:03:51 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, 
	Nhat Pham <nphamcs@...il.com>, Minchan Kim <minchan@...nel.org>, 
	Johannes Weiner <hannes@...xchg.org>, Brian Geffon <bgeffon@...gle.com>, linux-kernel@...r.kernel.org, 
	linux-mm@...ck.org
Subject: Re: [PATCHv2 2/2] zsmalloc: simplify read begin/end logic

On Wed, Jan 07, 2026 at 02:21:45PM +0900, Sergey Senozhatsky wrote:
> From: Yosry Ahmed <yosry.ahmed@...ux.dev>

While I appreciate this, I think for all intents and purposes this patch
should be credited to you; it's different from the diff I sent, as it
applies on top of your change.

If you're feeling really generous, I think Suggested-by or
Co-developed-by + Signed-off-by is sufficient :)

> 
> When we switched from using class->size (for spans detection)
> to actual compressed object size, we had to compensate for
> the fact that class->size implicitly took inlined handle
> into consideration.  In fact, instead of adjusting the size
> of compressed object (adding handle offset for non-huge size
> classes), we can move some lines around and simplify the
> code: there are already paths in read_begin/end that compensate
> for inlined object handle offset.

I think the commit log is not clear in isolation.

How about something like this:

zs_obj_read_begin() currently maps or copies the compressed object
with the prefix handle for the !ZsHugePage case.  Make the logic clearer
and more efficient by moving the offset of the object in the page past
the prefix handle instead, only copying the actual object and avoiding
the need to adjust the returned address to account for the prefix.

Adjust the logic to detect spanning objects in zs_obj_read_end()
accordingly, slightly simplifying it by avoiding the need to account for
the handle in both the offset and the object size.

> 
> Signed-off-by: Yosry Ahmed <yosry.ahmed@...ux.dev>
> ---
>  mm/zsmalloc.c | 9 ++-------
>  1 file changed, 2 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 119c196a287a..cc3d9501ae21 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1088,7 +1088,7 @@ void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
>  	off = offset_in_page(class->size * obj_idx);
>  
>  	if (!ZsHugePage(zspage))
> -		mem_len += ZS_HANDLE_SIZE;
> +		off += ZS_HANDLE_SIZE;
>  
>  	if (off + mem_len <= PAGE_SIZE) {
>  		/* this object is contained entirely within a page */
> @@ -1110,9 +1110,6 @@ void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
>  				 0, sizes[1]);
>  	}
>  
> -	if (!ZsHugePage(zspage))
> -		addr += ZS_HANDLE_SIZE;
> -
>  	return addr;
>  }
>  EXPORT_SYMBOL_GPL(zs_obj_read_begin);
> @@ -1133,11 +1130,9 @@ void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
>  	off = offset_in_page(class->size * obj_idx);
>  
>  	if (!ZsHugePage(zspage))
> -		mem_len += ZS_HANDLE_SIZE;
> +		off += ZS_HANDLE_SIZE;
>  
>  	if (off + mem_len <= PAGE_SIZE) {
> -		if (!ZsHugePage(zspage))
> -			off += ZS_HANDLE_SIZE;
>  		handle_mem -= off;
>  		kunmap_local(handle_mem);
>  	}
> -- 
> 2.52.0.351.gbe84eed79e-goog
> 
