Message-ID: <itnqbldahxd46zzwh5gq2iijcfrgyubp626bmr4jezpu43rkui@wal7n6ti6jq7>
Date: Tue, 6 May 2025 11:13:25 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Vitaly Wool <vitaly.wool@...sulko.se>, 
	Igor Belousov <igor.b@...dev.am>, linux-mm@...ck.org, akpm@...ux-foundation.org, 
	linux-kernel@...r.kernel.org, Nhat Pham <nphamcs@...il.com>, 
	Shakeel Butt <shakeel.butt@...ux.dev>, Yosry Ahmed <yosry.ahmed@...ux.dev>, 
	Minchan Kim <minchan@...nel.org>, Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: Re: [PATCH] mm/zblock: use vmalloc for page allocations

On (25/05/05 10:08), Johannes Weiner wrote:
> I've been using zsmalloc with 16k pages just fine for ~a year,
> currently running it on 6.14.2-asahi. This machine sees a lot of
> memory pressure, too.
> 
> Could this be a more recent regression, maybe in the new obj_write()?

This looks like a recent regression.  In the old code we had something like

	__zs_map_object(area, zpdescs, off, class->size)

which would use class->size for all memcpy() calculations:

       sizes[0] = PAGE_SIZE - off;
       sizes[1] = size - sizes[0];

       /* copy object to per-cpu buffer */
       memcpy_from_page(buf, zpdesc_page(zpdescs[0]), off, sizes[0]);
       memcpy_from_page(buf + sizes[0], zpdesc_page(zpdescs[1]), 0, sizes[1]);

So we would sometimes memcpy() more than the actual payload (the object size
can be smaller than class->size), which worked because the compressed buffer
is large enough.  In the new code we use the object size, but only for
write().

read_begin()/end() still use class->size, so I think in some cases we can
"unnecessarily" take the
	"object spans two pages, memcpy() both pages into a local copy"
path even when the actual object fits on one page.  We may also want to pass
the object size (which we know) to read_begin()/end(); that could potentially
save some memcpy() calls.
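
Something along these lines, perhaps (sketch only, not the actual
read_begin() interface; zpdescs/off/buf as in the snippet above):

	/* If the caller passes the known object size, the no-copy path can
	 * be taken whenever the payload itself fits in one page, even when
	 * class->size would cross the boundary. */
	if (off + obj_size <= PAGE_SIZE)
		return kmap_local_page(zpdesc_page(zpdescs[0])) + off;

	/* otherwise copy both halves into the per-cpu buffer, as today */
	sizes[0] = PAGE_SIZE - off;
	sizes[1] = obj_size - sizes[0];
	memcpy_from_page(buf, zpdesc_page(zpdescs[0]), off, sizes[0]);
	memcpy_from_page(buf + sizes[0], zpdesc_page(zpdescs[1]), 0, sizes[1]);
	return buf;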
