Message-ID: <uodv6dukliy7bnfprh4yoxjkrn77uqljarlg5pmlippxsxygzv@gthjss7yyrlf>
Date: Thu, 22 Jan 2026 12:28:56 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Yosry Ahmed <yosry.ahmed@...ux.dev>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>, 
	Nhat Pham <nphamcs@...il.com>, Andrew Morton <akpm@...ux-foundation.org>, 
	Minchan Kim <minchan@...nel.org>, Johannes Weiner <hannes@...xchg.org>, 
	Brian Geffon <bgeffon@...gle.com>, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC PATCH] zsmalloc: make common caches global

On (26/01/21 23:58), Yosry Ahmed wrote:
> On Wed, Jan 21, 2026 at 12:41:39PM +0900, Sergey Senozhatsky wrote:
> > On (26/01/19 13:44), Nhat Pham wrote:
> > > On Thu, Jan 15, 2026 at 9:53 PM Sergey Senozhatsky
> > > <senozhatsky@...omium.org> wrote:
> > > >
> > > > On (26/01/16 13:48), Sergey Senozhatsky wrote:
> > > > > Currently, zsmalloc creates kmem_cache-s for handles and zspages
> > > > > for each pool, which may be suboptimal from a memory usage
> > > > > point of view (extra internal fragmentation per pool).  Systems
> > > > > that create multiple zsmalloc pools may benefit from shared
> > > > > common zsmalloc caches.
> > > >
> > > > This is step 1.
> > > >
> > > > Step 2 is to look into the possibility of sharing zsmalloc pools.
> > > > E.g. if there are N zram devices in the system, do we really need
> > > > N zsmalloc pools?  Can we just share a single pool between them?
> > > 
> > > Ditto for zswap (although here, we almost always only have a single zswap pool).
> > 
> > COMPLETELY UNTESTED (current linux-next doesn't boot for me, hitting
> > an "Oops: stack guard page: 0000" early during boot).
> > 
> > So I'm thinking of something like below.  Basically, add a Kconfig
> > option that turns zsmalloc into a singleton pool mode, transparently
> > to zsmalloc users.
> 
> Why do we need a config option?  Is the main concern lock contention
> on a single pool?  If so, we can probably measure it by spawning many
> zram devices and stressing them at the same time.

That's a good question.  I haven't considered just converting
zsmalloc to a singleton pool by default.  I don't think I'm
concerned about lock contention; the thing is, we should have the
same upper bound contention-wise (there are only num_online_cpus()
tasks that can concurrently access any zsmalloc pool, be it a
singleton or not).  I'll certainly try to measure once I have
linux-next booting again.
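
To make the singleton mode concrete, roughly the shape I have in
mind is below.  This is an untested sketch, not the actual patch:
CONFIG_ZSMALLOC_COMMON_POOL is a made-up option name, and
__zs_create_pool()/__zs_destroy_pool() stand in for the current
pool constructor/destructor:

#ifdef CONFIG_ZSMALLOC_COMMON_POOL
/* One pool shared by all zsmalloc users, created on first use. */
static struct zs_pool *zs_common_pool;
static unsigned int zs_common_pool_users;
static DEFINE_MUTEX(zs_common_pool_lock);

struct zs_pool *zs_create_pool(const char *name)
{
	struct zs_pool *pool;

	/* @name is ignored in common-pool mode */
	mutex_lock(&zs_common_pool_lock);
	if (!zs_common_pool)
		zs_common_pool = __zs_create_pool("zs_common");
	pool = zs_common_pool;
	if (pool)
		zs_common_pool_users++;
	mutex_unlock(&zs_common_pool_lock);

	return pool;
}

void zs_destroy_pool(struct zs_pool *pool)
{
	mutex_lock(&zs_common_pool_lock);
	if (!--zs_common_pool_users) {
		__zs_destroy_pool(zs_common_pool);
		zs_common_pool = NULL;
	}
	mutex_unlock(&zs_common_pool_lock);
}
#endif /* CONFIG_ZSMALLOC_COMMON_POOL */

zram/zswap would keep calling zs_create_pool()/zs_destroy_pool()
exactly as they do today, which is what keeps the change transparent
to zsmalloc users.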

What was the reason you allocated multiple zsmalloc pools in zswap?
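
FWIW, the cache sharing itself (the subject of this RFC) boils down
to something like the sketch below, in mm/zsmalloc.c: one pair of
global kmem_cache-s, created on first pool creation and refcounted,
instead of a pair per pool.  Again a simplified, untested sketch;
the helper names are illustrative (ZS_HANDLE_SIZE and struct zspage
are existing zsmalloc internals):

static struct kmem_cache *zs_handle_cache;
static struct kmem_cache *zs_zspage_cache;
static unsigned int zs_cache_users;
static DEFINE_MUTEX(zs_cache_lock);

/* Called from zs_create_pool() instead of creating per-pool caches. */
static int zs_get_common_caches(void)
{
	int ret = 0;

	mutex_lock(&zs_cache_lock);
	if (!zs_cache_users) {
		zs_handle_cache = kmem_cache_create("zs_handle",
					ZS_HANDLE_SIZE, 0, 0, NULL);
		zs_zspage_cache = kmem_cache_create("zspage",
					sizeof(struct zspage), 0, 0, NULL);
		if (!zs_handle_cache || !zs_zspage_cache) {
			/* kmem_cache_destroy() is NULL-safe */
			kmem_cache_destroy(zs_handle_cache);
			kmem_cache_destroy(zs_zspage_cache);
			zs_handle_cache = NULL;
			zs_zspage_cache = NULL;
			ret = -ENOMEM;
			goto out;
		}
	}
	zs_cache_users++;
out:
	mutex_unlock(&zs_cache_lock);
	return ret;
}

/* Called from zs_destroy_pool(); last pool drops the caches. */
static void zs_put_common_caches(void)
{
	mutex_lock(&zs_cache_lock);
	if (!--zs_cache_users) {
		kmem_cache_destroy(zs_handle_cache);
		kmem_cache_destroy(zs_zspage_cache);
		zs_handle_cache = NULL;
		zs_zspage_cache = NULL;
	}
	mutex_unlock(&zs_cache_lock);
}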
