Message-ID: <ofjlah554fgcc43e66djtysmnagd7gduqutueyauipovs35qb7@mw7keptebqau>
Date: Wed, 7 Jan 2026 01:56:20 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, 
	Nhat Pham <nphamcs@...il.com>, Minchan Kim <minchan@...nel.org>, 
	Johannes Weiner <hannes@...xchg.org>, Brian Geffon <bgeffon@...gle.com>, linux-kernel@...r.kernel.org, 
	linux-mm@...ck.org
Subject: Re: [PATCH] zsmalloc: use actual object size to detect spans

On Wed, Jan 07, 2026 at 10:37:24AM +0900, Sergey Senozhatsky wrote:
> On (26/01/07 09:59), Sergey Senozhatsky wrote:
> > On (26/01/07 00:23), Yosry Ahmed wrote:
> > > Instead of modifying mem_len, can we modify 'off' like zs_obj_write()
> > > and zs_obj_read_end()? I think this can actually be done as a prequel to
> > > this patch. Arguably, it makes more sense as we avoid unnecessarily
> > > copying the handle (completely untested):
> > > 
> > > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > > index 5bf832f9c05c..48c288da43b8 100644
> > > --- a/mm/zsmalloc.c
> > > +++ b/mm/zsmalloc.c
> > > @@ -1087,6 +1087,9 @@ void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
> > >         class = zspage_class(pool, zspage);
> > >         off = offset_in_page(class->size * obj_idx);
> > > 
> > > +       if (!ZsHugePage(zspage))
> > > +               off += ZS_HANDLE_SIZE;
> > > +
> > >         if (off + class->size <= PAGE_SIZE) {
> > >                 /* this object is contained entirely within a page */
> > >                 addr = kmap_local_zpdesc(zpdesc);
> > > @@ -1107,9 +1110,6 @@ void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
> > >                                  0, sizes[1]);
> > >         }
> > > 
> > > -       if (!ZsHugePage(zspage))
> > > -               addr += ZS_HANDLE_SIZE;
> > > -
> > >         return addr;
> > >  }
> > >  EXPORT_SYMBOL_GPL(zs_obj_read_begin);
> > > @@ -1129,9 +1129,10 @@ void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
> > >         class = zspage_class(pool, zspage);
> > >         off = offset_in_page(class->size * obj_idx);
> > > 
> > > +       if (!ZsHugePage(zspage))
> > > +               off += ZS_HANDLE_SIZE;
> > > +
> > >         if (off + class->size <= PAGE_SIZE) {
> > > -               if (!ZsHugePage(zspage))
> > > -                       off += ZS_HANDLE_SIZE;
> > >                 handle_mem -= off;
> > >                 kunmap_local(handle_mem);
> > >         }
> > > 
> > > ---
> > > Does this work?
> > 
> > Sounds interesting.  Let me try it out.
> 
> I recall us having exactly this idea back when we first introduced
> the zs_obj_{read,write}_end() functions, and that it did not work.
> Somehow it panics in __memcpy+0xc/0x44.  Let me dig into it again.

Maybe it's because at this point we are still memcpy()ing class->size
bytes, and class->size already includes ZS_HANDLE_SIZE. So once the
offset has been bumped, the copy reads ZS_HANDLE_SIZE bytes past the
end of the object.
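
Something like this is what I have in mind (rough and completely
untested sketch; 'mem_len' is just an illustrative name, and any
identifiers not visible in the hunks above are from memory, so take
them with a grain of salt). If 'off' skips the handle up front, the
copy length has to drop ZS_HANDLE_SIZE as well, otherwise sizes[1]
grows by ZS_HANDLE_SIZE and the second copy runs past the object:

	size_t mem_len = class->size;

	/*
	 * Skip the handle in both the offset and the length, so the
	 * total number of bytes copied below stays within the object.
	 */
	if (!ZsHugePage(zspage)) {
		off += ZS_HANDLE_SIZE;
		mem_len -= ZS_HANDLE_SIZE;
	}

	if (off + mem_len <= PAGE_SIZE) {
		/* the payload is contained entirely within a page */
		addr = kmap_local_zpdesc(zpdesc);
		addr += off;
	} else {
		size_t sizes[2];

		/* the payload spans two pages, split the copy at the boundary */
		sizes[0] = PAGE_SIZE - off;
		sizes[1] = mem_len - sizes[0];
		addr = local_copy;

		memcpy_from_page(addr, zpdesc_page(zpdesc), off, sizes[0]);
		zpdesc = get_next_zpdesc(zpdesc);
		memcpy_from_page(addr + sizes[0], zpdesc_page(zpdesc),
				 0, sizes[1]);
	}

The boundary check is unchanged (off grows by ZS_HANDLE_SIZE, mem_len
shrinks by the same amount), and sizes[1] works out to the same value
as before, so only the total copied length changes.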
