Message-ID: <20200205191128.GA32606@Red>
Date: Wed, 5 Feb 2020 20:11:28 +0100
From: Corentin Labbe <clabbe.montjoie@...il.com>
To: Iuliana Prodan <iuliana.prodan@....com>
Cc: Herbert Xu <herbert@...dor.apana.org.au>,
Baolin Wang <baolin.wang@...aro.org>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Horia Geanta <horia.geanta@....com>,
Maxime Coquelin <mcoquelin.stm32@...il.com>,
Alexandre Torgue <alexandre.torgue@...com>,
Maxime Ripard <mripard@...nel.org>,
Aymen Sghaier <aymen.sghaier@....com>,
"David S. Miller" <davem@...emloft.net>,
Silvano Di Ninno <silvano.dininno@....com>,
Franck Lenormand <franck.lenormand@....com>,
linux-crypto@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-imx <linux-imx@....com>
Subject: Re: [PATCH v2 1/2] crypto: engine - support for parallel requests
On Tue, Feb 04, 2020 at 02:34:19PM +0200, Iuliana Prodan wrote:
> Added support for executing multiple requests, in parallel,
> for crypto engine.
> A new callback is added, can_enqueue_more, which asks the
> driver if the hardware has free space, to enqueue a new request.
> The new crypto_engine_alloc_init_and_set function, initialize
> crypto-engine, sets the maximum size for crypto-engine software
> queue (not hardcoded anymore) and the can_enqueue_more callback.
> On crypto_pump_requests, if can_enqueue_more callback returns true,
> a new request is send to hardware, until there is no space and the
> callback returns false.
>
> Signed-off-by: Iuliana Prodan <iuliana.prodan@....com>
> ---
> crypto/crypto_engine.c | 106 ++++++++++++++++++++++++++++++------------------
> include/crypto/engine.h | 10 +++--
> 2 files changed, 72 insertions(+), 44 deletions(-)
>
> diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
> index eb029ff..aba934f 100644
> --- a/crypto/crypto_engine.c
> +++ b/crypto/crypto_engine.c
> @@ -22,32 +22,18 @@
> * @err: error number
> */
> static void crypto_finalize_request(struct crypto_engine *engine,
> - struct crypto_async_request *req, int err)
> + struct crypto_async_request *req, int err)
> {
> - unsigned long flags;
> - bool finalize_cur_req = false;
> int ret;
> struct crypto_engine_ctx *enginectx;
>
> - spin_lock_irqsave(&engine->queue_lock, flags);
> - if (engine->cur_req == req)
> - finalize_cur_req = true;
> - spin_unlock_irqrestore(&engine->queue_lock, flags);
> -
> - if (finalize_cur_req) {
> - enginectx = crypto_tfm_ctx(req->tfm);
> - if (engine->cur_req_prepared &&
> - enginectx->op.unprepare_request) {
> - ret = enginectx->op.unprepare_request(engine, req);
> - if (ret)
> - dev_err(engine->dev, "failed to unprepare request\n");
> - }
> - spin_lock_irqsave(&engine->queue_lock, flags);
> - engine->cur_req = NULL;
> - engine->cur_req_prepared = false;
> - spin_unlock_irqrestore(&engine->queue_lock, flags);
> + enginectx = crypto_tfm_ctx(req->tfm);
> + if (enginectx->op.prepare_request &&
> + enginectx->op.unprepare_request) {
> + ret = enginectx->op.unprepare_request(engine, req);
> + if (ret)
> + dev_err(engine->dev, "failed to unprepare request\n");
> }
> -
> req->complete(req, err);
>
> kthread_queue_work(engine->kworker, &engine->pump_requests);
> @@ -73,10 +59,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
>
> spin_lock_irqsave(&engine->queue_lock, flags);
>
> - /* Make sure we are not already running a request */
> - if (engine->cur_req)
> - goto out;
> -
Hello
Your patch has the same problem as mine, the one reported by Horia.
If the queue holds more than one request, a first crypto_pump_requests() will send one request, and for drivers that do not block in do_one_request(), crypto_pump_requests() will then return.
Another crypto_pump_requests() can then fire and send a second request, even though the driver cannot handle more than one request at a time.
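For example (just an illustrative sequence, not taken from your code):

	crypto_pump_requests()
		-> do_one_request(req A)	/* driver hands A to the hardware and returns */
	/* crypto_pump_requests() returns, A is still running in the hardware */
	crypto_pump_requests()			/* kicked again, queue is not empty */
		-> do_one_request(req B)	/* hardware is still busy with A */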
So we need to replace engine->cur_req with another locking mechanism.
Perhaps the cleanest way is to add a "request count", incremented in do_one_request() and decremented in crypto_finalize_request(), along the lines of the sketch below.
I know the early versions had that and it was removed, but I do not see a better way.
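Something like this untested sketch (req_cnt and max_req_cnt are only illustrative names, not from your patch):

	/* in struct crypto_engine */
	atomic_t req_cnt;	/* requests currently given to the hardware */
	int max_req_cnt;	/* how many requests the hardware can hold at once */

	/* in crypto_pump_requests(), before dequeueing a request */
	if (atomic_read(&engine->req_cnt) >= engine->max_req_cnt)
		goto out;

	/* in crypto_pump_requests(), when handing the request to the driver */
	atomic_inc(&engine->req_cnt);
	ret = enginectx->op.do_one_request(engine, async_req);

	/* in crypto_finalize_request() */
	atomic_dec(&engine->req_cnt);
	req->complete(req, err);
	kthread_queue_work(engine->kworker, &engine->pump_requests);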
Regards