Message-ID: <7u6k3kfvifkfcwfzxzgbwymdhjhcwmb2z6o4ju2kddwlfwtsaq@xapk55ehdonc>
Date: Thu, 22 Jan 2026 12:55:12 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Yosry Ahmed <yosry.ahmed@...ux.dev>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>, 
	Nhat Pham <nphamcs@...il.com>, Andrew Morton <akpm@...ux-foundation.org>, 
	Minchan Kim <minchan@...nel.org>, Johannes Weiner <hannes@...xchg.org>, 
	Brian Geffon <bgeffon@...gle.com>, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC PATCH] zsmalloc: make common caches global

On (26/01/22 03:39), Yosry Ahmed wrote:
[..]
> > That's a good question.  I haven't thought about just converting
> > zsmalloc to a singleton pool by default.  I don't think I'm
> > concerned with lock contention; the thing is, we should have the
> > same upper bound contention-wise (there are only num_online_cpus()
> > tasks that can concurrently access any zsmalloc pool, be it a singleton
> > or not).  I certainly will try to measure once I have linux-next booting
> > again.
> > 
> > What was the reason you allocated multiple zsmalloc pools in zswap?
> 
> IIRC it was actually lock contention, specifically the pool spinlock.
> When the change was made to per-class spinlocks, we dropped the multiple
> pools:
> http://lore.kernel.org/linux-mm/20240617-zsmalloc-lock-mm-everything-v1-0-5e5081ea11b3@linux.dev/.
> 
> So having multiple pools does mitigate lock contention in some cases.
> Even though the upper boundary might be the same, the actual number of
> CPUs contending on the same lock would go down in practice.
> 
> While looking for this, I actually found something more interesting. I
> did propose more-or-less the same exact patch back when zswap used
> multiple pools:
> https://lore.kernel.org/all/20240604175340.218175-1-yosryahmed@google.com/.
> 
> Seems like Minchan had some concerns back then. I wonder if those still
> apply.

Interesting.  Lifecycles are completely random; I don't see how we
can make any assumptions about them, or how we can rely on them to
avoid/control fragmentation.  I think we should have global caches.
