Message-ID: <SJ2PR11MB8472D347836B6CA3FEB0CDEEC9A3A@SJ2PR11MB8472.namprd11.prod.outlook.com>
Date: Tue, 9 Dec 2025 19:38:20 +0000
From: "Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com>
To: Yosry Ahmed <yosry.ahmed@...ux.dev>
CC: Herbert Xu <herbert@...dor.apana.org.au>, SeongJae Park <sj@...nel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>, "hannes@...xchg.org"
	<hannes@...xchg.org>, "nphamcs@...il.com" <nphamcs@...il.com>,
	"chengming.zhou@...ux.dev" <chengming.zhou@...ux.dev>,
	"usamaarif642@...il.com" <usamaarif642@...il.com>, "ryan.roberts@....com"
	<ryan.roberts@....com>, "21cnbao@...il.com" <21cnbao@...il.com>,
	"ying.huang@...ux.alibaba.com" <ying.huang@...ux.alibaba.com>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	"senozhatsky@...omium.org" <senozhatsky@...omium.org>, "kasong@...cent.com"
	<kasong@...cent.com>, "linux-crypto@...r.kernel.org"
	<linux-crypto@...r.kernel.org>, "davem@...emloft.net" <davem@...emloft.net>,
	"clabbe@...libre.com" <clabbe@...libre.com>, "ardb@...nel.org"
	<ardb@...nel.org>, "ebiggers@...gle.com" <ebiggers@...gle.com>,
	"surenb@...gle.com" <surenb@...gle.com>, "Accardi, Kristen C"
	<kristen.c.accardi@...el.com>, "Gomes, Vinicius" <vinicius.gomes@...el.com>,
	"Feghali, Wajdi K" <wajdi.k.feghali@...el.com>, "Gopal, Vinodh"
	<vinodh.gopal@...el.com>, "Sridhar, Kanchana P"
	<kanchana.p.sridhar@...el.com>
Subject: RE: [PATCH v13 22/22] mm: zswap: Batched zswap_compress() with
 compress batching of large folios.


> -----Original Message-----
> From: Yosry Ahmed <yosry.ahmed@...ux.dev>
> Sent: Tuesday, December 9, 2025 9:32 AM
> To: Sridhar, Kanchana P <kanchana.p.sridhar@...el.com>
> Cc: Herbert Xu <herbert@...dor.apana.org.au>; SeongJae Park
> <sj@...nel.org>; linux-kernel@...r.kernel.org; linux-mm@...ck.org;
> hannes@...xchg.org; nphamcs@...il.com; chengming.zhou@...ux.dev;
> usamaarif642@...il.com; ryan.roberts@....com; 21cnbao@...il.com;
> ying.huang@...ux.alibaba.com; akpm@...ux-foundation.org;
> senozhatsky@...omium.org; kasong@...cent.com; linux-
> crypto@...r.kernel.org; davem@...emloft.net; clabbe@...libre.com;
> ardb@...nel.org; ebiggers@...gle.com; surenb@...gle.com; Accardi,
> Kristen C <kristen.c.accardi@...el.com>; Gomes, Vinicius
> <vinicius.gomes@...el.com>; Feghali, Wajdi K <wajdi.k.feghali@...el.com>;
> Gopal, Vinodh <vinodh.gopal@...el.com>
> Subject: Re: [PATCH v13 22/22] mm: zswap: Batched zswap_compress() with
> compress batching of large folios.
> 
> On Tue, Dec 09, 2025 at 05:21:06PM +0000, Sridhar, Kanchana P wrote:
> >
> > > -----Original Message-----
> > > From: Yosry Ahmed <yosry.ahmed@...ux.dev>
> > > Sent: Tuesday, December 9, 2025 8:55 AM
> > > To: Herbert Xu <herbert@...dor.apana.org.au>
> > > Cc: Sridhar, Kanchana P <kanchana.p.sridhar@...el.com>; SeongJae Park
> > > <sj@...nel.org>; linux-kernel@...r.kernel.org; linux-mm@...ck.org;
> > > hannes@...xchg.org; nphamcs@...il.com;
> chengming.zhou@...ux.dev;
> > > usamaarif642@...il.com; ryan.roberts@....com; 21cnbao@...il.com;
> > > ying.huang@...ux.alibaba.com; akpm@...ux-foundation.org;
> > > senozhatsky@...omium.org; kasong@...cent.com; linux-
> > > crypto@...r.kernel.org; davem@...emloft.net; clabbe@...libre.com;
> > > ardb@...nel.org; ebiggers@...gle.com; surenb@...gle.com; Accardi,
> > > Kristen C <kristen.c.accardi@...el.com>; Gomes, Vinicius
> > > <vinicius.gomes@...el.com>; Feghali, Wajdi K
> <wajdi.k.feghali@...el.com>;
> > > Gopal, Vinodh <vinodh.gopal@...el.com>
> > > Subject: Re: [PATCH v13 22/22] mm: zswap: Batched zswap_compress()
> with
> > > compress batching of large folios.
> > >
> > > On Tue, Dec 09, 2025 at 10:32:20AM +0800, Herbert Xu wrote:
> > > > On Tue, Dec 09, 2025 at 01:15:02AM +0000, Yosry Ahmed wrote:
> > > > >
> > > > > Just to clarify, does this mean that zswap can pass a batch of (eight)
> > > > > pages to the acomp API, and get the results for the batch uniformly
> > > > > whether or not the underlying compressor supports batching?
> > > >
> > > > Correct.  In fact I'd like to remove the batch size exposure to zswap
> > > > altogether.  zswap should just pass along whatever maximum number of
> > > > pages that is convenient to itself.
> > >
> > > I think exposing the batch size is still useful as a hint for zswap. In
> > > the current series, zswap allocates as many per-CPU buffers as the
> > > compressor's batch size, so no extra buffers for non-batching
> > > compressors (including SW compressors).
> > >
> > > If we use the same batch size regardless, we'll have to always allocate
> > > 8 (or N) per-CPU buffers, for little to no benefit on non-batching
> > > compressors.
> > >
> > > So we still want the batch size on the zswap side, but we want the
> > > crypto API to be uniform whether or not the compressor supports
> > > batching.
> >
> > Thanks Yosry, you bring up a good point. I currently have the outer for
> > loop in zswap_compress() due to the above constraint. For non-batching
> > compressors, we allocate only one per-CPU buffer. Hence, we need to
> > call crypto_acomp_compress() and write the compressed data to the
> > zs_pool for each page in the batch. Wouldn't we need to allocate
> > 8 per-CPU buffers for non-batching compressors if we want zswap to
> > send a batch of 8 pages uniformly to the crypto API, so that
> > zswap_compress() can store the 8 pages in zs_pool after the crypto
> > API returns?
> 
> Ugh, yes.. I don't think we want to burn 7 extra pages per-CPU for SW
> compressors.
> 
> I think the cleanest way to handle this would be to:
> - Rename zswap_compress() to __zswap_compress(), and make it handle a
>   given batch size (which would be 1 or 8).
> - Introduce zswap_compress() as a wrapper that breaks down the folio
>   into batches and loops over them, passing them to __zswap_compress().
> - __zswap_compress() has a single unified path (e.g. for compressed
>   length and error handling), regardless of the batch size.
> 
> Can this be done with the current acomp API? I think all we really need
> is to be able to pass in a batch of size N (which can be 1), and read
> the error and compressed length in a single way. This is my main problem
> with the current patch.

Once Herbert provides the crypto_acomp modification that has non-batching
compressors set acomp_req->dst->length to the compressed length (or error
value), I think the same could be accomplished with the current patch, since
I would then be able to delete "errp". IOW, I think a simplification is
possible without introducing __zswap_compress(). The code would look seamless
for non-batching and batching compressors, with the only visible distinction
being the outer for loop that iterates over the batch based on
pool->compr_batch_size in the current patch.
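
For concreteness, a rough sketch of the unified path I have in mind is below
(names such as pool->compr_batch_size and acomp_ctx follow the current patch,
the per-page length reporting via the dst SG entries is assumed from Herbert's
proposed change, and the dst_sgs[] member plus all error handling are only
illustrative):

	/*
	 * One loop for both cases: compr_batch_size is 1 for non-batching
	 * compressors, so each "batch" is a single page and only one
	 * per-CPU dst buffer is needed.
	 */
	for (i = 0; i < nr_pages; i += pool->compr_batch_size) {
		unsigned int nr = min_t(unsigned int, nr_pages - i,
					pool->compr_batch_size);

		/* Chain src pages [i, i + nr) and the nr dst buffers into
		 * the acomp request's SG lists, then issue one call. */
		if (crypto_acomp_compress(acomp_ctx->req))
			goto err;

		for (j = 0; j < nr; j++) {
			/* Assumed: the j-th dst SG entry's length now holds
			 * the compressed size for that page. */
			dlen = acomp_ctx->dst_sgs[j].length;

			/* zs_malloc() an object of size dlen and copy the
			 * j-th dst buffer into zs_pool, as today. */
		}
	}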

Alternatively, we could introduce a __zswap_compress() that abstracts a single
iteration of that outer for loop: it compresses 1 or 8 pages as one "batch".
However, the zswap_compress() wrapper would still need to distinguish
non-batching from batching compressors: both for sending
pool->compr_batch_size pages at a time to __zswap_compress() and for iterating
over the single/multiple dst buffers when writing to zs_pool (the latter could
be done within __zswap_compress(), but the point remains: the distinction has
to be made in one place or the other).
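
In that case, the split might look roughly like the sketch below (again only
illustrative; the __zswap_compress() signature and the folio/index arguments
are made up for the example, and error handling beyond the return value is
elided):

	/*
	 * Compresses one batch of 'nr' pages starting at folio index 'start'
	 * and writes the resulting dst buffer(s) into zs_pool. 'nr' is 1 for
	 * non-batching compressors, up to compr_batch_size otherwise.
	 */
	static int __zswap_compress(struct zswap_pool *pool, struct folio *folio,
				    long start, unsigned int nr);

	static int zswap_compress(struct folio *folio, struct zswap_pool *pool)
	{
		long i, nr_pages = folio_nr_pages(folio);
		int ret;

		for (i = 0; i < nr_pages; i += pool->compr_batch_size) {
			unsigned int nr = min_t(long, pool->compr_batch_size,
						nr_pages - i);

			ret = __zswap_compress(pool, folio, i, nr);
			if (ret)
				return ret;
		}
		return 0;
	}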

It could be argued that keeping the crypto calls seamless with respect to
pool->compr_batch_size, together with the iteration over the output SG
lists/buffers that this implies, is cleaner when it stays self-contained in
zswap_compress(). We already have zswap_store_pages() processing the folio in
batches; maybe minimizing the number of functions that do batch processing
would be cleaner?

In any case, let me know which would be preferable.

Thanks,
Kanchana

> 
> In the future, if it's beneficial for some SW compressors to batch
> compressions, we can look into optimizations for the per-CPU buffers to
> avoid allocating 8 pages per-CPU (e.g. shared page pool), or make this
> opt-in for certain SW compressors that justify the cost.
> 
> >
> > Thanks,
> > Kanchana
> >
> > >
> > > >
> > > > Cheers,
> > > > --
> > > > Email: Herbert Xu <herbert@...dor.apana.org.au>
> > > > Home Page: http://gondor.apana.org.au/~herbert/
> > > > PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
