Message-ID: <AM0PR04MB717171C785D20ECC74B415638C150@AM0PR04MB7171.eurprd04.prod.outlook.com>
Date: Fri, 14 Feb 2020 01:25:50 +0000
From: Iuliana Prodan <iuliana.prodan@....com>
To: Herbert Xu <herbert@...dor.apana.org.au>
CC: Baolin Wang <baolin.wang@...aro.org>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Corentin Labbe <clabbe.montjoie@...il.com>,
Horia Geanta <horia.geanta@....com>,
Maxime Coquelin <mcoquelin.stm32@...il.com>,
Alexandre Torgue <alexandre.torgue@...com>,
Maxime Ripard <mripard@...nel.org>,
Aymen Sghaier <aymen.sghaier@....com>,
"David S. Miller" <davem@...emloft.net>,
Silvano Di Ninno <silvano.dininno@....com>,
Franck Lenormand <franck.lenormand@....com>,
"linux-crypto@...r.kernel.org" <linux-crypto@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
dl-linux-imx <linux-imx@....com>
Subject: Re: [PATCH v3 1/2] crypto: engine - support for parallel requests
On 2/13/2020 8:18 AM, Herbert Xu wrote:
> On Fri, Feb 07, 2020 at 02:36:13PM +0200, Iuliana Prodan wrote:
>>
>> +start_request:
>> + /* If hardware is busy, do not send any request */
>> + if (engine->can_enqueue_more) {
>> + if (!engine->can_enqueue_more(engine))
>> + goto out;
>
> Instead of a driver callback I'd rather the driver called into
> the engine telling it to stop/start, similar to how net drivers
> work.
>
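If I've read your suggestion right, you mean something along these lines
(everything below is just a sketch; crypto_engine_stop_queue() /
crypto_engine_start_queue() are placeholder names I made up, mirroring
netif_stop_queue()/netif_wake_queue(), and hw_slots_left()/priv stand in for
the driver's own bookkeeping - none of this is existing API):

	/* driver's do_one_request(): the last free hw slot was just used */
	if (!hw_slots_left(priv))
		crypto_engine_stop_queue(engine);  /* engine stops pumping requests */

	/* driver's done/irq path: a hw slot has been freed */
	crypto_engine_start_queue(engine);         /* engine resumes pumping requests */
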
Given your suggestion, I’m thinking of implementing do_one_request, in the
driver, to return -EINPROGRESS if the hw can enqueue more requests and -EBUSY
otherwise (solution 1). But this implies updating all the drivers that use
crypto-engine (something I wouldn’t mind doing, but I don’t have the hw to
test it).
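In the driver, solution 1 would look roughly like this (just a sketch;
xxx_priv, hw_can_enqueue() and xxx_enqueue() are made-up stand-ins for the
driver's own slot accounting and submission code):

	static int xxx_do_one_request(struct crypto_engine *engine, void *areq)
	{
		struct xxx_priv *priv = engine->priv_data;

		if (!hw_can_enqueue(priv))
			return -EBUSY;		/* hw full: engine holds back */

		xxx_enqueue(priv, areq);	/* accepted; completion comes later */
		return -EINPROGRESS;		/* hw can take more, keep sending */
	}
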
My current proposal keeps crypto-engine backward compatible, so there is no
need to change the other drivers. If they want to send multiple requests or
batch requests, they can implement can_enqueue_more and do_batch_requests,
respectively (solution 2).
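With solution 2, an interested driver would only need to add something like
this (again just a sketch; the inflight counter, max_hw_slots and
xxx_flush_ring() are made-up stand-ins for driver-specific bookkeeping):

	/* tell crypto-engine whether the hw can accept another request */
	static int xxx_can_enqueue_more(struct crypto_engine *engine)
	{
		struct xxx_priv *priv = engine->priv_data;

		return atomic_read(&priv->inflight) < priv->max_hw_slots;
	}

	/* optional: push out whatever has been batched so far */
	static int xxx_do_batch_requests(struct crypto_engine *engine)
	{
		return xxx_flush_ring(engine->priv_data);
	}
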
Please let me know how you want me to proceed: solution 1 or solution 2? Or
maybe I’ve missed something?
Thanks,
Iulia