Message-ID: <j6tlqyecmcf7anevhvptoh6lis6hzigencccjpq2j5uy2rax52@fytgstv37ynr>
Date: Wed, 21 Jan 2026 01:30:07 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, 
	Minchan Kim <minchan@...nel.org>, Nhat Pham <nphamcs@...il.com>, 
	Johannes Weiner <hannes@...xchg.org>, Brian Geffon <bgeffon@...gle.com>, linux-kernel@...r.kernel.org, 
	linux-mm@...ck.org
Subject: Re: [RFC PATCH] zsmalloc: make common caches global

On Sat, Jan 17, 2026 at 11:24:01AM +0900, Sergey Senozhatsky wrote:
> On (26/01/16 20:49), Yosry Ahmed wrote:
> > On Fri, Jan 16, 2026 at 01:48:41PM +0900, Sergey Senozhatsky wrote:
> > > Currently, zsmalloc creates kmem_cache of handles and zspages
> > > for each pool, which may be suboptimal from the memory usage
> > > point of view (extra internal fragmentation per pool).  Systems
> > > that create multiple zsmalloc pools may benefit from shared
> > > common zsmalloc caches.
> > 
> > I had a similar patch internally when we had 32 zsmalloc pools with
> > zswap.
> 
> Oh, nice.
> 
> > You can calculate the savings by using /proc/slabinfo. The unused memory
> > is (num_objs-active_objs)*objsize. You can sum this across all caches
> > when you have multiple pools, and compare it to the unused memory with a
> > single cache.
> 
> Right.  Just curious, do you recall any numbers?

I actually have the exact numbers, from /proc/slabinfo while running an
internal zswap test:

*** Before:
	# name <active_objs> <num_objs> <objsize> ..
	zs_handle  35637  35760     16  ...
	zs_handle  35577  35760     16  ...
	zs_handle  35638  35760     16  ...
	zs_handle  35700  35760     16  ...
	zs_handle  35937  36240     16  ...
	zs_handle  35518  35760     16  ...
	zs_handle  35700  36000     16  ...
	zs_handle  35517  35760     16  ...
	zs_handle  35818  36000     16  ...
	zs_handle  35698  35760     16  ...
	zs_handle  35536  35760     16  ...
	zs_handle  35877  36240     16  ...
	zs_handle  35757  36000     16  ...
	zs_handle  35760  36000     16  ...
	zs_handle  35820  36000     16  ...
	zs_handle  35999  36000     16  ...
	zs_handle  35700  36000     16  ...
	zs_handle  35817  36000     16  ...
	zs_handle  35698  36000     16  ...
	zs_handle  35699  36000     16  ...
	zs_handle  35580  35760     16  ...
	zs_handle  35578  35760     16  ...
	zs_handle  35820  36000     16  ...
	zs_handle  35517  35760     16  ...
	zs_handle  35700  36000     16  ...
	zs_handle  35640  35760     16  ...
	zs_handle  35820  36000     16  ...
	zs_handle  35578  35760     16  ...
	zs_handle  35578  35760     16  ...
	zs_handle  35817  36000     16  ...
	zs_handle  35518  35760     16  ...
	zs_handle  35940  36240     16  ...
	zspage    991   1079     48   ...
	zspage    936    996     48   ...
	zspage    940    996     48   ...
	zspage   1050   1079     48   ...
	zspage    973   1079     48   ...
	zspage    942    996     48   ...
	zspage   1065   1162     48   ...
	zspage    885    996     48   ...
	zspage    887    913     48   ...
	zspage   1053   1079     48   ...
	zspage    983    996     48   ...
	zspage    966    996     48   ...
	zspage    970   1079     48   ...
	zspage    880    913     48   ...
	zspage   1006   1079     48   ...
	zspage    998   1079     48   ...
	zspage   1129   1162     48   ...
	zspage    903    913     48   ...
	zspage    833    996     48   ...
	zspage    861    913     48   ...
	zspage    764    913     48   ...
	zspage    898    913     48   ...
	zspage    973   1079     48   ...
	zspage    945    996     48   ...
	zspage    943   1079     48   ...
	zspage   1024   1079     48   ...
	zspage    820    913     48   ...
	zspage    702    830     48   ...
	zspage   1049   1079     48   ...
	zspage    990   1162     48   ...
	zspage    988   1079     48   ...
	zspage    932    996     48   ...

Unused memory = $(awk '{s += $4*($3-$2)} END {print s}') = 218416 bytes

*** After:
	# name <active_objs> <num_objs> <objsize> ..
	zs_handle 1054440 1054800     16  ...
	zspage   5720   5810     48   ...

Unused memory = (1054800-1054440)*16 + (5810-5720)*48 = 10080 bytes

That was roughly a 20x reduction in waste (218416 / 10080 ≈ 21.7) when
using 32 pools with zswap. I suspect we wouldn't be using that many
pools with zram.
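
For reference, roughly the same sum can be taken straight from
/proc/slabinfo (reading it needs root; this assumes the caches keep
the zs_handle/zspage names shown above):

	# total waste = (num_objs - active_objs) * objsize, summed over both caches
	awk '$1 == "zs_handle" || $1 == "zspage" {s += $4*($3-$2)}
	     END {print s}' /proc/slabinfo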

> 
> [..]
> > Hmm, instead of the repeated kmem_cache_destroy() calls, can we do
> > something like this:
> 
> Sure.
