Message-ID: <99c059c6-6360-47d0-8513-7171d9f2e9af@hogyros.de>
Date: Fri, 20 Jun 2025 12:07:39 +0900
From: Simon Richter <Simon.Richter@...yros.de>
To: Eric Biggers <ebiggers@...nel.org>, T Pratham <t-pratham@...com>
Cc: Herbert Xu <herbert@...dor.apana.org.au>,
"David S. Miller" <davem@...emloft.net>, Rob Herring <robh@...nel.org>,
Krzysztof Kozlowski <krzk+dt@...nel.org>, Conor Dooley
<conor+dt@...nel.org>, linux-crypto@...r.kernel.org,
devicetree@...r.kernel.org, linux-kernel@...r.kernel.org,
Kamlesh Gurudasani <kamlesh@...com>, Vignesh Raghavendra <vigneshr@...com>,
Praneeth Bajjuri <praneeth@...com>, Manorit Chawdhry <m-chawdhry@...com>
Subject: Re: [PATCH v5 0/2] Add support for Texas Instruments DTHE V2 crypto
accelerator

Hi,

On 6/17/25 13:27, Eric Biggers wrote:
> Numbers, please. What is the specific, real use case in Linux where this
> patchset actually improves performance? Going off the CPU and back again just
> to en/decrypt some data is hugely expensive.

It would be cool to get some numbers from the IBM folks as well. The NX
coprocessor can do AES and SHA, but it is not enabled in the Linux
kernel; only GZIP is (where I can definitely see a benefit, usually
somewhere between 3 and 9 GB/s depending on how hard it needs to look
for repetitions). I'm wondering whether that is an oversight or
deliberate.

I also wonder if, for some hardware, we could get a speedup by
offloading and then polling for completion instead of waiting for an
interrupt. It feels wrong, but the thread is blocked either way.
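
Purely as a sketch of what I mean (register offset, bit name and the
xyz_ prefix are made up, not taken from the DTHE driver), the polling
side could look roughly like this:

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/types.h>

#define XYZ_STATUS_REG   0x10    /* hypothetical status register offset */
#define XYZ_STATUS_DONE  BIT(0)  /* hypothetical "operation complete" bit */

static int xyz_wait_done_polled(void __iomem *base)
{
	u32 status;

	/*
	 * Busy-wait on the status register, checking every 1 us and
	 * giving up after 100 us, instead of enabling the completion
	 * interrupt and sleeping until it fires.
	 */
	return readl_poll_timeout_atomic(base + XYZ_STATUS_REG, status,
					 status & XYZ_STATUS_DONE, 1, 100);
}

Whether that wins anything obviously depends on how long the engine
actually takes for a typical request.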

The other thing to ponder is whether we can define a data size
threshold above which the offload overhead becomes small enough that
it is still worth it. That would also work for fscrypt, because with
4k blocks it would simply never choose the offload engine.
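
Again only a rough sketch with made-up names and an unmeasured
threshold, to illustrate routing small requests to a software fallback
and only large ones to the engine:

#include <crypto/internal/skcipher.h>
#include <crypto/skcipher.h>
#include <linux/errno.h>

#define XYZ_OFFLOAD_MIN_LEN	8192	/* bytes; needs real benchmarking */

struct xyz_ctx {
	/* software fallback, allocated with CRYPTO_ALG_NEED_FALLBACK */
	struct crypto_skcipher *fallback;
};

static int xyz_queue_to_hardware(struct skcipher_request *req)
{
	/* DMA mapping and descriptor submission would go here. */
	return -EINPROGRESS;
}

static int xyz_encrypt(struct skcipher_request *req)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	struct xyz_ctx *ctx = crypto_skcipher_ctx(tfm);

	if (req->cryptlen < XYZ_OFFLOAD_MIN_LEN) {
		/*
		 * Too small to amortize the round trip to the engine:
		 * hand the request to the software fallback.  Assumes
		 * the reqsize was set large enough for the sub-request.
		 */
		struct skcipher_request *subreq = skcipher_request_ctx(req);

		skcipher_request_set_tfm(subreq, ctx->fallback);
		skcipher_request_set_callback(subreq, req->base.flags,
					      req->base.complete,
					      req->base.data);
		skcipher_request_set_crypt(subreq, req->src, req->dst,
					   req->cryptlen, req->iv);
		return crypto_skcipher_encrypt(subreq);
	}

	return xyz_queue_to_hardware(req);
}

The interesting question is where that cutoff would sit for this
hardware, which is exactly what benchmarks would need to show.
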
Simon