Message-ID: <20240510080827.GB950946@google.com>
Date: Fri, 10 May 2024 17:08:27 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Christoph Hellwig <hch@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Minchan Kim <minchan@...nel.org>, linux-kernel@...r.kernel.org,
linux-block@...r.kernel.org,
"David S. Miller" <davem@...emloft.net>,
linux-crypto@...r.kernel.org
Subject: Re: [PATCHv3 00/19] zram: convert to custom compression API and
allow algorithms tuning
On (24/05/10 15:40), Herbert Xu wrote:
> > But in general case, a typical crypto API usage
> >
> > tfm = crypto_alloc_comp(comp->name, 0, 0);
> >
> > should become much more complex. I'd say that, probably, developing
> > an entirely new sub-set of API would be simpler.
>
> We could easily add a setparams interface for acomp to support
> this. The form of parameters would be specific to each individual
> algorithm (but obviously all drivers for the same algorithm must
> use the same format).
For some algorithms params need to be set before the ctx is created.
For example zstd: crypto/zstd calls zstd_get_params(ZSTD_DEF_LEVEL, 0)
to estimate the workspace size, which misses the opportunity to configure
it in a way zram/zswap can benefit from, because those work with PAGE_SIZE
source buffers. So for zram zstd_get_params(ZSTD_DEF_LEVEL, PAGE_SIZE)
is much better (it saves 1.2MB per ctx, which is per-CPU in zram). Not
to mention that zstd_get_params(param->level, 0) is what we need in the
end.
And then drivers need to be re-implemented to support params. For
example, crypto/lz4 should call LZ4_compress_fast() instead of
LZ4_compress_default(), because fast() accepts a compression level
(acceleration), which is a tunable value.