Message-ID: <YaqC1mkcyzO2WrI/@google.com>
Date: Fri, 3 Dec 2021 12:49:26 -0800
From: Dmitry Torokhov <dmitry.torokhov@...il.com>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: Kees Cook <keescook@...omium.org>,
Geliang Tang <geliangtang@...il.com>,
Arnd Bergmann <arnd@...db.de>, Haren Myneni <haren@...ibm.com>,
Anton Vorontsov <anton@...msg.org>,
Colin Cross <ccross@...roid.com>,
Tony Luck <tony.luck@...el.com>,
Zhou Wang <wangzhou1@...ilicon.com>,
linux-crypto <linux-crypto@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/9] crypto: add zbufsize() interface
On Fri, Dec 03, 2021 at 01:28:21PM +1100, Herbert Xu wrote:
> On Thu, Dec 02, 2021 at 12:10:13AM -0800, Kees Cook wrote:
> >
> > I'd rather just have a simple API that hid all the async (or sync) details
> > and would work with whatever was the "best" implementation. Neither pstore
> > nor the module loader has anything else to do while decompression happens.
>
> Well that's exactly what the acomp interface is supposed to be.
> It supports any algorithm, whether sync or async. However, for
> obvious reasons this interface has to be async.
>
> > > Typically this would only make sense if you process a very small
> > > amount of data, but this seems counter-intuitive with compression
> > > (it does make sense with hashing where we often hash just 16 bytes).
> >
> > pstore works on usually a handful of small buffers. (One of the largest
> > I've seen is used by Chrome OS: 6 128K buffers.) Speed is not important
> > (done at most 6 times at boot, and 1 time on panic), and, in fact,
> > offload is probably a bad idea just to keep the machinery needed to
> > store a panic log as small as possible.
>
> In that case creating an scomp user interface is probably the best
> course of action.
>
> > Why can't crypto_comp_*() be refactored to wrap crypto_acomp_*() (and
> > crypto_scomp_*())? I can see so many other places that would benefit from
> > this. Here are just some of the places that appear to be hand-rolling
> > compression/decompression routines that might benefit from this kind of
> > code re-use and compression alg agnosticism:
>
> We cannot provide async hardware through a sync-only interface
> because that may lead to dead-lock. For your use-cases you should
> avoid using any async implementations.
>
> The scomp interface is meant to be pretty much identical to the
> legacy comp interface except that it supports integration with
> acomp.
>
> Because nobody has had a need for scomp we have not added an
> interface for it so it only exists as part of the low-level API.
> You're most welcome to expose it if you don't need the async
> support part of acomp.
I must be getting lost in terminology, and it feels to me that what is
being discussed here is of little interest to a lot of potential users,
especially ones that do compression/decompression. In the majority of
cases they simply want to compress or decompress data, quickly and with
a minimal amount of memory consumed. They do not particularly care
whether the task is offloaded or executed on the main CPU, on a separate
thread or on the same thread, so the discussion about scomp/acomp/etc.
is of no interest to them. From their perspective they would be totally
fine with a wrapper that does:
int decompress(...)
{
	prepare_request();
	send_request();
	wait_for_request();
}
and from their perspective this would be a synchronous API they are
happy with.
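For a caller like pstore something along these lines would presumably
be enough. This is only a rough sketch on top of the existing acomp
calls, using crypto_wait_req() for the blocking part; the helper name
and the error handling are invented for illustration:

#include <crypto/acompress.h>
#include <linux/crypto.h>
#include <linux/errno.h>
#include <linux/scatterlist.h>

/*
 * Sketch only: drive the async acomp interface but block for the
 * result, so that callers see a plain synchronous call.
 */
static int decompress_sync(const char *alg, struct scatterlist *src,
			   unsigned int slen, struct scatterlist *dst,
			   unsigned int dlen)
{
	struct crypto_acomp *tfm;
	struct acomp_req *req;
	DECLARE_CRYPTO_WAIT(wait);
	int ret;

	tfm = crypto_alloc_acomp(alg, 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = acomp_request_alloc(tfm);
	if (!req) {
		ret = -ENOMEM;
		goto free_tfm;
	}

	/* prepare_request() */
	acomp_request_set_params(req, src, dst, slen, dlen);
	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   crypto_req_done, &wait);

	/* send_request() + wait_for_request() */
	ret = crypto_wait_req(crypto_acomp_decompress(req), &wait);

	acomp_request_free(req);
free_tfm:
	crypto_free_acomp(tfm);
	return ret;
}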
So from the POV of such users, what is actually missing is a streaming
mode of compression/decompression, where the core would allow supplying
additional input data on demand and consuming output as it is being
produced, and I do not see anything like that in either scomp or acomp.
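To illustrate the shape I mean, here is roughly how the in-kernel zlib
interface (linux/zlib.h) is driven: input can be refilled whenever
avail_in drops to zero and output is consumed one window at a time.
This is purely an example of the streaming model, not a suggestion to
use zlib directly, and the window size and error handling are
simplified:

#include <linux/types.h>
#include <linux/vmalloc.h>
#include <linux/zlib.h>

/* Sketch only: decompress a buffer while consuming output in chunks. */
static int streaming_inflate_sketch(u8 *in, unsigned int in_len)
{
	u8 window[256];		/* arbitrary output chunk size */
	z_stream strm = {};
	int ret;

	strm.workspace = vmalloc(zlib_inflate_workspacesize());
	if (!strm.workspace)
		return -ENOMEM;

	ret = zlib_inflateInit2(&strm, MAX_WBITS);
	if (ret != Z_OK)
		goto out;

	/* all input supplied up front here; could be refilled on demand */
	strm.next_in = in;
	strm.avail_in = in_len;

	do {
		/* give the core a fresh output window ... */
		strm.next_out = window;
		strm.avail_out = sizeof(window);

		ret = zlib_inflate(&strm, Z_NO_FLUSH);

		/*
		 * ... and (sizeof(window) - strm.avail_out) bytes are now
		 * ready to be consumed by the caller.
		 */
	} while (ret == Z_OK && (strm.avail_in || !strm.avail_out));

	zlib_inflateEnd(&strm);
out:
	vfree(strm.workspace);
	return ret == Z_STREAM_END ? 0 : -EIO;
}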
Thanks.
--
Dmitry