Message-ID: <yhecgcnt52hnsyf23p576mz2mlnffqrluikwzv6tdn3bnmzumc@thpyltdpxtjq>
Date: Wed, 10 Dec 2025 16:01:49 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: "Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com>
Cc: Herbert Xu <herbert@...dor.apana.org.au>,
SeongJae Park <sj@...nel.org>, "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, "hannes@...xchg.org" <hannes@...xchg.org>,
"nphamcs@...il.com" <nphamcs@...il.com>, "chengming.zhou@...ux.dev" <chengming.zhou@...ux.dev>,
"usamaarif642@...il.com" <usamaarif642@...il.com>, "ryan.roberts@....com" <ryan.roberts@....com>,
"21cnbao@...il.com" <21cnbao@...il.com>, "ying.huang@...ux.alibaba.com" <ying.huang@...ux.alibaba.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>, "senozhatsky@...omium.org" <senozhatsky@...omium.org>,
"kasong@...cent.com" <kasong@...cent.com>, "linux-crypto@...r.kernel.org" <linux-crypto@...r.kernel.org>,
"davem@...emloft.net" <davem@...emloft.net>, "clabbe@...libre.com" <clabbe@...libre.com>,
"ardb@...nel.org" <ardb@...nel.org>, "ebiggers@...gle.com" <ebiggers@...gle.com>,
"surenb@...gle.com" <surenb@...gle.com>, "Accardi, Kristen C" <kristen.c.accardi@...el.com>,
"Gomes, Vinicius" <vinicius.gomes@...el.com>, "Feghali, Wajdi K" <wajdi.k.feghali@...el.com>,
"Gopal, Vinodh" <vinodh.gopal@...el.com>
Subject: Re: [PATCH v13 22/22] mm: zswap: Batched zswap_compress() with
compress batching of large folios.
On Tue, Dec 09, 2025 at 07:38:20PM +0000, Sridhar, Kanchana P wrote:
>
> > On Tue, Dec 09, 2025 at 05:21:06PM +0000, Sridhar, Kanchana P wrote:
> > >
> > > > On Tue, Dec 09, 2025 at 10:32:20AM +0800, Herbert Xu wrote:
> > > > > On Tue, Dec 09, 2025 at 01:15:02AM +0000, Yosry Ahmed wrote:
> > > > > >
> > > > > > Just to clarify, does this mean that zswap can pass a batch of (eight)
> > > > > > pages to the acomp API, and get the results for the batch uniformly
> > > > > > whether or not the underlying compressor supports batching?
> > > > >
> > > > > Correct. In fact I'd like to remove the batch size exposure to zswap
> > > > > altogether. zswap should just pass along whatever maximum number of
> > > > > pages is convenient to it.
> > > >
> > > > I think exposing the batch size is still useful as a hint for zswap. In
> > > > the current series, zswap allocates as many per-CPU buffers as the
> > > > compressor's batch size, so no extra buffers for non-batching
> > > > compressors (including SW compressors).
> > > >
> > > > If we use the same batch size regardless, we'll have to always allocate
> > > > 8 (or N) per-CPU buffers, for little to no benefit on non-batching
> > > > compressors.
> > > >
> > > > So we still want the batch size on the zswap side, but we want the
> > > > crypto API to be uniform whether or not the compressor supports
> > > > batching.
> > >
> > > Thanks Yosry, you bring up a good point. I currently have the outer for
> > > loop in zswap_compress() due to the above constraint. For non-batching
> > > compressors, we allocate only one per-CPU buffer. Hence, we need to
> > > call crypto_acomp_compress() and write the compressed data to the
> > > zs_pool for each page in the batch. Wouldn't we need to allocate
> > > 8 per-CPU buffers for non-batching compressors if we want zswap to
> > > send a batch of 8 pages uniformly to the crypto API, so that
> > > zswap_compress() can store the 8 pages in zs_pool after the crypto
> > > API returns?
> >
> > Ugh, yes.. I don't think we want to burn 7 extra pages per-CPU for SW
> > compressors.
> >
> > I think the cleanest way to handle this would be to:
> > - Rename zswap_compress() to __zswap_compress(), and make it handle a
> > given batch size (which would be 1 or 8).
> > - Introduce zswap_compress() as a wrapper that breaks down the folio
> > into batches and loops over them, passing them to __zswap_compress().
> > - __zswap_compress() has a single unified path (e.g. for compressed
> > length and error handling), regardless of the batch size.
> >
> > Can this be done with the current acomp API? I think all we really need
> > is to be able to pass in a batch of size N (which can be 1), and read
> > the error and compressed length in a single way. This is my main problem
> > with the current patch.
>
> Once Herbert gives us the crypto_acomp modification for non-batching
> compressors to set the acomp_req->dst->length to the
> compressed length/error value, I think the same could be accomplished
> with the current patch, since I will be able to delete the "errp". IOW, I think
> a simplification is possible without introducing __zswap_compress(). The
> code will look seamless for non-batching and batching compressors, and the
> distinction will be made apparent by the outer for loop that iterates over
> the batch based on the pool->compr_batch_size in the current patch.
I think moving the outer loop out into a wrapper would make the function
digestible without nested loops.
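Something like the below is what I have in mind (just a sketch; the
signatures and names are illustrative, not the exact ones in the patch):

static bool __zswap_compress(struct folio *folio, long start, unsigned int nr,
			     struct zswap_entry *entries[],
			     struct zswap_pool *pool, int nid);

static bool zswap_compress(struct folio *folio, long start,
			   unsigned int nr_pages,
			   struct zswap_entry *entries[],
			   struct zswap_pool *pool, int nid)
{
	/* 1 for non-batching (incl. SW) compressors, e.g. 8 for IAA */
	unsigned int step = pool->compr_batch_size;
	unsigned int i;

	for (i = 0; i < nr_pages; i += step) {
		unsigned int nr = min(step, nr_pages - i);

		if (!__zswap_compress(folio, start + i, nr, &entries[i],
				      pool, nid))
			return false;
	}
	return true;
}

The wrapper is then the only place that knows about
pool->compr_batch_size; __zswap_compress() just handles whatever 'nr' it
is given.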
>
> Alternatively, we could introduce a __zswap_compress() that abstracts a
> single iteration of the outer for loop: it compresses 1 or 8 pages as a
> "batch". However, the distinction between non-batching and batching
> compressors would still need to be made in the zswap_compress() wrapper:
> both for sending pool->compr_batch_size pages to __zswap_compress() and
> for iterating over the single/multiple dst buffers to write to zs_pool
> (the latter could be done within __zswap_compress(), but the point
> remains: we would need to distinguish in one place or the other).
Not sure what you mean by the latter. IIUC, for all compressors
__zswap_compress() would iterate over the dst buffers and write to
zs_pool, whether the number of dst buffers is 1 or 8. So there wouldn't
be any different handling in __zswap_compress(), right?
That's my whole motivation for introducing a wrapper that abstracts away
the batching size.
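To make it concrete, __zswap_compress() would look something along these
lines (again only a sketch; how exactly the per-page compressed
length/error is read back depends on what the acomp API ends up
providing, and acomp_ctx->buffers[] is just a placeholder name):

static bool __zswap_compress(struct folio *folio, long start, unsigned int nr,
			     struct zswap_entry *entries[],
			     struct zswap_pool *pool, int nid)
{
	unsigned int i;

	/*
	 * Single crypto_acomp_compress() call covering all 'nr' pages,
	 * with the nr per-CPU dst buffers (acomp_ctx->buffers[0..nr-1])
	 * hooked up to the request.
	 */

	for (i = 0; i < nr; i++) {
		/*
		 * Read the compressed length (or error) for page i in the
		 * uniform way the acomp API provides it, then write
		 * acomp_ctx->buffers[i] into zs_pool and fill in
		 * entries[i].  Same code path whether nr is 1 or 8.
		 */
	}
	return true;
}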
>
> It could be argued that it would be cleaner to keep the handling of the
> crypto calls based on pool->compr_batch_size, and the logical
> distinctions this imposes when iterating over the output SG
> lists/buffers, self-contained in zswap_compress(). We already have a
> zswap_store_pages() that processes the folio in batches. Maybe
> minimizing the number of functions that do batch processing would be
> cleaner?
Yeah, it's not great that we'll end up with zswap_store_pages() splitting
the folio into batches of 8, and then zswap_compress() further splitting
them into compression batches -- but we'll have that anyway. Whether that
happens inside zswap_compress() or in a wrapper doesn't make much of a
difference imo.
Also, splitting the folio differently at different levels makes semantic
sense. zswap_store_pages() splits it into batches of 8, because this is
what zswap handles (mainly to avoid dynamically allocating things like
entries). zswap_compress() will split it further if the underlying
compressor prefers that, to avoid allocating many buffer pages. So I
think it kinda makes sense.
In the future, we can revisit the split in zswap_compress() if we have a
good case for batching compression for SW (e.g. compress every 8 pages
as a single unit), or if we can optimize the per-CPU buffers somehow.
>
> In any case, let me know which would be preferable.
>
> Thanks,
> Kanchana
>
> >
> > In the future, if it's beneficial for some SW compressors to batch
> > compressions, we can look into optimizations for the per-CPU buffers to
> > avoid allocating 8 pages per-CPU (e.g. shared page pool), or make this
> > opt-in for certain SW compressors that justify the cost.
> >
> > >
> > > Thanks,
> > > Kanchana
> > >
> > > >
> > > > >
> > > > > Cheers,
> > > > > --
> > > > > Email: Herbert Xu <herbert@...dor.apana.org.au>
> > > > > Home Page: http://gondor.apana.org.au/~herbert/
> > > > > PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt