Message-Id: <20190129113143.290397932@linuxfoundation.org>
Date: Tue, 29 Jan 2019 12:36:36 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Max Gurtovoy <maxg@...lanox.com>,
Christoph Hellwig <hch@....de>,
Raju Rangoju <rajur@...lsio.com>,
Sagi Grimberg <sagi@...mberg.me>, Jens Axboe <axboe@...nel.dk>
Subject: [PATCH 4.9 41/44] nvmet-rdma: fix null dereference under heavy load
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Raju Rangoju <rajur@...lsio.com>
commit 5cbab6303b4791a3e6713dfe2c5fda6a867f9adc upstream.
Under heavy load, if we don't have any pre-allocated rsps left, we
dynamically allocate a rsp, but we do not actually allocate memory
for the nvme_completion (rsp->req.rsp). In that case, accessing
pointer fields (req->rsp->status) in nvmet_req_init() results in a
crash.

To fix this, allocate the memory for the nvme_completion by calling
nvmet_rdma_alloc_rsp(), as sketched in the note after the diff below.
Fixes: 8407879c ("nvmet-rdma: fix possible bogus dereference under heavy load")
Cc: <stable@...r.kernel.org>
Reviewed-by: Max Gurtovoy <maxg@...lanox.com>
Reviewed-by: Christoph Hellwig <hch@....de>
Signed-off-by: Raju Rangoju <rajur@...lsio.com>
Signed-off-by: Sagi Grimberg <sagi@...mberg.me>
Signed-off-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/nvme/target/rdma.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -137,6 +137,10 @@ static void nvmet_rdma_recv_done(struct
static void nvmet_rdma_read_data_done(struct ib_cq *cq, struct ib_wc *wc);
static void nvmet_rdma_qp_event(struct ib_event *event, void *priv);
static void nvmet_rdma_queue_disconnect(struct nvmet_rdma_queue *queue);
+static void nvmet_rdma_free_rsp(struct nvmet_rdma_device *ndev,
+ struct nvmet_rdma_rsp *r);
+static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
+ struct nvmet_rdma_rsp *r);
static struct nvmet_fabrics_ops nvmet_rdma_ops;
@@ -175,9 +179,17 @@ nvmet_rdma_get_rsp(struct nvmet_rdma_que
spin_unlock_irqrestore(&queue->rsps_lock, flags);
if (unlikely(!rsp)) {
- rsp = kmalloc(sizeof(*rsp), GFP_KERNEL);
+ int ret;
+
+ rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);
if (unlikely(!rsp))
return NULL;
+ ret = nvmet_rdma_alloc_rsp(queue->dev, rsp);
+ if (unlikely(ret)) {
+ kfree(rsp);
+ return NULL;
+ }
+
rsp->allocated = true;
}
@@ -190,6 +202,7 @@ nvmet_rdma_put_rsp(struct nvmet_rdma_rsp
unsigned long flags;
if (unlikely(rsp->allocated)) {
+ nvmet_rdma_free_rsp(rsp->queue->dev, rsp);
kfree(rsp);
return;
}
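
For readers following along, below is a small userspace model of the
fallback path before and after this patch. It is only a sketch: the
struct layout, the model_* names, and the simplified pool handling are
illustrative stand-ins, not the driver's actual definitions; only the
pairing of the alloc/free helpers mirrors what the patch does.

	/*
	 * Userspace model of the fixed allocation pattern (not kernel
	 * code). A rsp normally comes from a pre-allocated pool; under
	 * load the code falls back to a dynamic allocation, and the bug
	 * was that the fallback skipped allocating the embedded
	 * completion that nvmet_req_init() later dereferences.
	 */
	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct nvme_completion { unsigned short status; };

	struct model_rsp {
		struct nvme_completion *cqe;	/* stands in for rsp->req.rsp */
		bool allocated;			/* true if not from the pool */
	};

	/* Stand-in for nvmet_rdma_alloc_rsp(): allocate the completion too. */
	static int model_alloc_rsp(struct model_rsp *r)
	{
		r->cqe = calloc(1, sizeof(*r->cqe));
		return r->cqe ? 0 : -1;
	}

	/* Stand-in for nvmet_rdma_free_rsp(). */
	static void model_free_rsp(struct model_rsp *r)
	{
		free(r->cqe);
	}

	/* Fallback path of nvmet_rdma_get_rsp() after the fix. */
	static struct model_rsp *model_get_rsp_fallback(void)
	{
		struct model_rsp *rsp = calloc(1, sizeof(*rsp)); /* kzalloc */

		if (!rsp)
			return NULL;
		if (model_alloc_rsp(rsp)) {	/* this call was missing pre-fix */
			free(rsp);
			return NULL;
		}
		rsp->allocated = true;
		return rsp;
	}

	/* Matching release path, mirroring nvmet_rdma_put_rsp(). */
	static void model_put_rsp(struct model_rsp *rsp)
	{
		if (rsp->allocated) {
			model_free_rsp(rsp);	/* also added by the fix */
			free(rsp);
		}
		/* pool case omitted: a pooled rsp is returned to the list */
	}

	int main(void)
	{
		struct model_rsp *rsp = model_get_rsp_fallback();

		if (!rsp)
			return 1;
		/* Pre-fix, this dereference would hit a NULL rsp->cqe. */
		printf("status = %hu\n", rsp->cqe->status);
		model_put_rsp(rsp);
		return 0;
	}

The point of the pattern is symmetry: once the dynamic fallback
allocates everything a pooled rsp carries, the put path only needs the
allocated flag to know whether to free or to re-list.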