Message-ID: <307953c3-6f41-2e2c-eba5-5dcd2fb5e1b4@nokia.com>
Date: Thu, 12 Apr 2018 17:08:40 +0200
From: Alexander Sverdlin <alexander.sverdlin@...ia.com>
To: Ioan Nicu <ioan.nicu.ext@...ia.com>,
Alexandre Bounine <alex.bou9@...il.com>,
Barry Wood <barry.wood@....com>,
Matt Porter <mporter@...nel.crashing.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Christophe JAILLET <christophe.jaillet@...adoo.fr>,
Al Viro <viro@...iv.linux.org.uk>,
Logan Gunthorpe <logang@...tatee.com>,
Chris Wilson <chris@...is-wilson.co.uk>,
Tvrtko Ursulin <tvrtko.ursulin@...el.com>,
Frank Kunz <frank.kunz@...ia.com>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] rapidio: fix rio_dma_transfer error handling
On 12/04/18 17:06, Ioan Nicu wrote:
> Some of the mport_dma_req structure members were initialized late
> inside the do_dma_request() function, just before submitting the
> request to the DMA engine. However, there are several error branches
> before that point. On such an error, the code would return on the
> error path and trigger a call to dma_req_free() with a req structure
> that is not fully initialized. This causes a NULL pointer
> dereference in dma_req_free().
>
> This patch fixes these error branches by making sure that all
> necessary mport_dma_req structure members are initialized in
> rio_dma_transfer() immediately after the request structure is
> allocated.
>
> Signed-off-by: Ioan Nicu <ioan.nicu.ext@...ia.com>
Tested-by: Alexander Sverdlin <alexander.sverdlin@...ia.com>
> ---
> drivers/rapidio/devices/rio_mport_cdev.c | 19 +++++++++----------
> 1 file changed, 9 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
> index 9d27016c899e..0434ab7b6497 100644
> --- a/drivers/rapidio/devices/rio_mport_cdev.c
> +++ b/drivers/rapidio/devices/rio_mport_cdev.c
> @@ -740,10 +740,7 @@ static int do_dma_request(struct mport_dma_req *req,
> tx->callback = dma_xfer_callback;
> tx->callback_param = req;
>
> - req->dmach = chan;
> - req->sync = sync;
> req->status = DMA_IN_PROGRESS;
> - init_completion(&req->req_comp);
> kref_get(&req->refcount);
>
> cookie = dmaengine_submit(tx);
> @@ -831,13 +828,20 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
> if (!req)
> return -ENOMEM;
>
> - kref_init(&req->refcount);
> -
> ret = get_dma_channel(priv);
> if (ret) {
> kfree(req);
> return ret;
> }
> + chan = priv->dmach;
> +
> + kref_init(&req->refcount);
> + init_completion(&req->req_comp);
> + req->dir = dir;
> + req->filp = filp;
> + req->priv = priv;
> + req->dmach = chan;
> + req->sync = sync;
>
> /*
> * If parameter loc_addr != NULL, we are transferring data from/to
> @@ -925,11 +929,6 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
> xfer->offset, xfer->length);
> }
>
> - req->dir = dir;
> - req->filp = filp;
> - req->priv = priv;
> - chan = priv->dmach;
> -
> nents = dma_map_sg(chan->device->dev,
> req->sgt.sgl, req->sgt.nents, dir);
> if (nents == 0) {
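For readers following along, the failure mode can be sketched outside the kernel: a zero-allocated request whose ->dmach member is only assigned late can reach the free path with dmach still NULL. The types and helper names below are simplified, illustrative stand-ins for the rapidio ones, not the real kernel definitions; the point is the pattern the patch adopts of initializing everything the free path needs immediately after allocation.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-ins for the kernel types; illustrative only. */
struct dma_chan { int id; };

struct mport_dma_req {
	int refcount;           /* stands in for struct kref     */
	struct dma_chan *dmach; /* dereferenced by the free path */
	int sync;
};

/* Mirrors the hazard in dma_req_free(): the free path dereferences
 * req->dmach (the real code does so via dma_unmap_sg()), so dmach
 * must already be set by the time any error branch drops the req. */
static int dma_req_free(struct mport_dma_req *req)
{
	int chan_id = req->dmach->id; /* NULL deref if dmach unset */
	free(req);
	return chan_id;
}

/* After the fix: every member the free path needs is initialized
 * immediately after allocation, before any branch that can fail. */
static struct mport_dma_req *rio_alloc_req(struct dma_chan *chan, int sync)
{
	struct mport_dma_req *req = calloc(1, sizeof(*req)); /* kzalloc-like */

	if (!req)
		return NULL;
	req->refcount = 1;  /* kref_init() equivalent              */
	req->dmach = chan;  /* set up front, not in do_dma_request() */
	req->sync = sync;
	return req;
}
```

With this ordering, any error branch between allocation and submission can drop its reference and let the free path run safely, because no member it touches is left uninitialized.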
--
Best regards,
Alexander Sverdlin.