Message-ID: <8896482c-c447-45f1-a59c-998a13119ece@huawei.com>
Date: Mon, 25 Aug 2025 21:00:37 +0800
From: huangchenghai <huangchenghai2@...wei.com>
To: Herbert Xu <herbert@...dor.apana.org.au>
CC: <davem@...emloft.net>, <linux-kernel@...r.kernel.org>,
	<linux-crypto@...r.kernel.org>, <qianweili@...wei.com>,
	<linwenkai6@...ilicon.com>, <wangzhou1@...ilicon.com>, <taoqi10@...wei.com>
Subject: Re: [PATCH v2 0/3] crypto: hisilicon - add fallback function for
 hisilicon accelerator driver


On 2025/8/25 12:36, Herbert Xu wrote:
> On Mon, Aug 18, 2025 at 02:57:11PM +0800, Chenghai Huang wrote:
>> Support fallback for zip/sec2/hpre when device is busy.
>>
>> V1: https://lore.kernel.org/all/20250809070829.47204-1-huangchenghai2@huawei.com/
>> Updates:
>> - Remove unnecessary callback completions.
>> - Add CRYPTO_ALG_NEED_FALLBACK to hisi_zip's cra_flags.
>>
>> Chenghai Huang (1):
>>    crypto: hisilicon/zip - support fallback for zip
>>
>> Qi Tao (1):
>>    crypto: hisilicon/sec2 - support skcipher/aead fallback for hardware
>>      queue unavailable
>>
>> Weili Qian (1):
>>    crypto: hisilicon/hpre - support the hpre algorithm fallback
>>
>>   drivers/crypto/hisilicon/Kconfig            |   1 +
>>   drivers/crypto/hisilicon/hpre/hpre_crypto.c | 314 +++++++++++++++++---
>>   drivers/crypto/hisilicon/qm.c               |   4 +-
>>   drivers/crypto/hisilicon/sec2/sec_crypto.c  |  62 +++-
>>   drivers/crypto/hisilicon/zip/zip_crypto.c   |  52 +++-
>>   5 files changed, 360 insertions(+), 73 deletions(-)
> Are you mapping one hardware queue to a single tfm object?
Yes, in our current implementation, each hardware queue is mapped
to a dedicated tfm object.
>
> Hardware queues should be shared between tfm objects.
>
> Cheers,
Thank you for your suggestion.

We do not currently support sharing hardware queues between tfm
objects, for the following reasons:
a) Queue multiplexing (allowing multiple tfms to share one queue)
improves resource utilization in theory. In practice, however, the
hardware's processing capacity is shared across all queues, so once
the device reaches its physical limit, new requests can only wait
in line. Multiplexing therefore lengthens the queues without
increasing throughput, and only adds waiting latency for the
submitting services. When no queue is available, it is better to
fall back directly to software processing.

In our benchmark tests, only 16 queues (i.e. 16 tfms) are needed
to saturate the hardware's full bandwidth.

b) Once a queue has been initialized by a tfm, a new tfm whose
algorithm differs from the queue's cannot share it. Queue reuse is
therefore limited to tfms of the same algorithm type.

Thanks
Chenghai
