Message-ID: <20240328193149.GF7597@cmpxchg.org>
Date: Thu, 28 Mar 2024 15:31:49 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	Nhat Pham <nphamcs@...il.com>,
	Chengming Zhou <chengming.zhou@...ux.dev>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 6/9] mm: zswap: drop support for non-zero same-filled pages handling

On Mon, Mar 25, 2024 at 11:50:14PM +0000, Yosry Ahmed wrote:
> The current same-filled pages handling supports pages filled with any
> repeated word-sized pattern. However, in practice, most of these should
> be zero pages anyway. Other patterns should not be nearly as common.
> 
> Drop the support for non-zero same-filled pages, but keep the names of
> knobs exposed to userspace as "same_filled", which isn't entirely
> inaccurate.
> 
> This yields some nice code simplification and enables a following patch
> that completely eliminates the need to allocate struct zswap_entry for
> those pages.
> 
> There is also a very small performance improvement observed over 50 runs
> of the kernel build test (kernbench), comparing the mean build time on a
> Skylake machine when building the kernel in a cgroup v1 container with a
> 3G limit:
> 
> 		base		patched		% diff
> real		70.167		69.915		-0.359%
> user		2953.068	2956.147	+0.104%
> sys		2612.811	2594.718	-0.692%
> 
> This probably comes from more optimized operations like memchr_inv() and
> clear_highpage(). Note that the percentage of zero-filled pages during
> this test was only around 1.5% on average, and was not affected by this
> patch. Practical workloads could have a larger proportion of such pages
> (e.g. Johannes observed around 10% [1]), so the performance improvement
> should be larger.
> 
> [1] https://lore.kernel.org/linux-mm/20240320210716.GH294822@cmpxchg.org/
> 
> Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
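
For context, the check being discussed boils down to something like
this. A minimal userspace sketch, not the kernel code: the kernel
operates on a kmapped page and, per the message above, the patched
version uses memchr_inv(); PAGE_SIZE and page alignment are assumed
here.

#include <stdbool.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Pre-patch check: is the page one repeated word-sized pattern? */
static bool is_same_filled(const void *page, unsigned long *value)
{
	const unsigned long *p = page;	/* assumes a page-aligned buffer */
	size_t i, n = PAGE_SIZE / sizeof(*p);

	for (i = 1; i < n; i++)
		if (p[i] != p[0])
			return false;
	*value = p[0];
	return true;
}

/* Post-patch check: zero-filled only; in the kernel this is just
 * !memchr_inv(page, 0, PAGE_SIZE). The memcmp trick below compares
 * every byte against the first one. */
static bool is_zero_filled(const void *page)
{
	const unsigned char *c = page;

	return c[0] == 0 && !memcmp(c, c + 1, PAGE_SIZE - 1);
}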

This is an interesting direction to pursue, but I actually think it
doesn't go far enough. Either way, I think it needs more data.

1) How frequent are non-zero-same-filled pages? Difficult to
   generalize, but if you could gather some numbers from your fleet,
   that would be useful. If you can devise a portable strategy, I'd
   also be more than happy to gather this on ours (although I think
   you have more widespread zswap use, whereas we have more disk
   swap). A sketch of the classification step is at the end of this
   mail.

2) The fact that we're doing any of this pattern analysis in zswap at
   all strikes me as a bit misguided. Being efficient about repetitive
   patterns is squarely in the domain of a compression algorithm. Do
   we not trust e.g. zstd to handle this properly? (See the zstd
   sketch at the end of this mail.)

   I'm guessing this goes back to inefficient packing from something
   like zbud, which would waste half a page on one repeating byte.

   But zsmalloc can do 32-byte objects. It's also a batching slab
   allocator, where storing a series of small, same-sized objects is
   quite fast.

   Add to that the additional branches, the additional kmap, the extra
   scanning of every single page for patterns - all in the fast path
   of zswap, when we already know that the vast majority of incoming
   pages will need to be properly compressed anyway.

   Maybe it's time to get rid of the special handling entirely?
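
For (1), the classification step is easy to sketch. Hypothetical
userspace code: classify_pages() and struct tally are made-up names,
the input is assumed to be a page-aligned buffer of sampled page
contents, and how those pages get sampled (pagemap walk, crash dump,
whatever is practical) is left open:

#define PAGE_SIZE 4096UL

struct tally { unsigned long zero, same, other; };

static void classify_pages(const void *buf, unsigned long npages,
			   struct tally *t)
{
	unsigned long i, j, nwords = PAGE_SIZE / sizeof(unsigned long);

	for (i = 0; i < npages; i++) {
		const unsigned long *w =
			(const void *)((const char *)buf + i * PAGE_SIZE);

		/* zswap's definition: one repeated word-sized pattern */
		for (j = 1; j < nwords && w[j] == w[0]; j++)
			;
		if (j < nwords)
			t->other++;	/* needs the real compressor */
		else if (w[0] == 0)
			t->zero++;	/* zero-filled */
		else
			t->same++;	/* same-filled, non-zero */
	}
}

The interesting ratio is same / (zero + same).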
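
And for (2), the compression side is just as easy to sanity-check from
userspace. A standalone sketch against libzstd (not the kernel crypto
API; level 1 is an arbitrary choice; build with -lzstd):

#include <stdio.h>
#include <string.h>
#include <zstd.h>

#define PAGE_SIZE 4096

int main(void)
{
	unsigned char page[PAGE_SIZE];
	unsigned char out[ZSTD_COMPRESSBOUND(PAGE_SIZE)];
	unsigned long long pattern = 0x0102030405060708ULL;
	size_t i, csize;

	/* one repeated word-sized pattern, i.e. a same-filled page */
	for (i = 0; i < PAGE_SIZE; i += sizeof(pattern))
		memcpy(page + i, &pattern, sizeof(pattern));

	csize = ZSTD_compress(out, sizeof(out), page, sizeof(page), 1);
	if (ZSTD_isError(csize)) {
		fprintf(stderr, "%s\n", ZSTD_getErrorName(csize));
		return 1;
	}

	printf("%d -> %zu bytes\n", PAGE_SIZE, csize);
	return 0;
}

If that prints a few dozen bytes, as I'd expect, space isn't an
argument for keeping the special case; what's left is the cost of
invoking the compressor at all, which is measurable.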
