Message-Id: <20180716073507.676609755@linuxfoundation.org>
Date: Mon, 16 Jul 2018 09:36:33 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Christian Black <christian.d.black@...el.com>,
Jon Derrick <jonathan.derrick@...el.com>,
Keith Busch <keith.busch@...el.com>,
Christoph Hellwig <hch@....de>,
Scott Bauer <scott.bauer@...el.com>
Subject: [PATCH 4.9 25/32] nvme-pci: Remap CMB SQ entries on every controller reset

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Keith Busch <keith.busch@...el.com>

commit 815c6704bf9f1c59f3a6be380a4032b9c57b12f1 upstream.

The controller memory buffer is remapped into a kernel address on each
reset, but the driver was setting the submission queue base address
only on the very first queue creation. The remapped address is likely to
change after a reset, so accessing the old address will hit a kernel bug.
This patch fixes that by setting the queue's CMB base address each time
the queue is created.
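
To make the failure mode concrete, here is a minimal, self-contained
userspace sketch of the same stale-pointer-after-remap pattern. It is
not driver code: malloc()/free() stand in for the CMB remap, and the
names (fake_dev, fake_queue, reset_controller, create_queue) are
invented for the illustration.

#include <stdio.h>
#include <stdlib.h>

struct fake_dev {
	char *cmb;		/* stands in for the ioremap()ed CMB */
};

struct fake_queue {
	char *sq_cmds_io;	/* cached pointer into dev->cmb */
};

/* Stands in for a controller reset: the CMB is unmapped and remapped,
 * so its base address is likely to change. */
static void reset_controller(struct fake_dev *dev)
{
	free(dev->cmb);
	dev->cmb = malloc(4096);
}

/* The fixed behavior: re-derive the queue's base from the current
 * mapping every time the queue is created. */
static void create_queue(struct fake_dev *dev, struct fake_queue *q,
			 unsigned offset)
{
	q->sq_cmds_io = dev->cmb + offset;
}

int main(void)
{
	struct fake_dev dev = { .cmb = malloc(4096) };
	struct fake_queue q;

	create_queue(&dev, &q, 0);
	reset_controller(&dev);
	/*
	 * The old q->sq_cmds_io now points into the stale mapping; the
	 * pre-patch driver kept using such a pointer after a reset.
	 * Re-running the creation step picks up the new base instead:
	 */
	create_queue(&dev, &q, 0);
	printf("queue base tracks the current mapping: %p\n",
	       (void *)q.sq_cmds_io);
	free(dev.cmb);
	return 0;
}
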
Fixes: f63572dff1421 ("nvme: unmap CMB and remove sysfs file in reset path")
Reported-by: Christian Black <christian.d.black@...el.com>
Cc: Jon Derrick <jonathan.derrick@...el.com>
Cc: <stable@...r.kernel.org> # 4.9+
Signed-off-by: Keith Busch <keith.busch@...el.com>
Reviewed-by: Christoph Hellwig <hch@....de>
Signed-off-by: Scott Bauer <scott.bauer@...el.com>
Reviewed-by: Jon Derrick <jonathan.derrick@...el.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 drivers/nvme/host/pci.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1034,17 +1034,15 @@ static int nvme_cmb_qdepth(struct nvme_d
 static int nvme_alloc_sq_cmds(struct nvme_dev *dev, struct nvme_queue *nvmeq,
 				int qid, int depth)
 {
-	if (qid && dev->cmb && use_cmb_sqes && NVME_CMB_SQS(dev->cmbsz)) {
-		unsigned offset = (qid - 1) * roundup(SQ_SIZE(depth),
-						      dev->ctrl.page_size);
-		nvmeq->sq_dma_addr = dev->cmb_bus_addr + offset;
-		nvmeq->sq_cmds_io = dev->cmb + offset;
-	} else {
-		nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth),
-					&nvmeq->sq_dma_addr, GFP_KERNEL);
-		if (!nvmeq->sq_cmds)
-			return -ENOMEM;
-	}
+
+	/* CMB SQEs will be mapped before creation */
+	if (qid && dev->cmb && use_cmb_sqes && NVME_CMB_SQS(dev->cmbsz))
+		return 0;
+
+	nvmeq->sq_cmds = dma_alloc_coherent(dev->dev, SQ_SIZE(depth),
+					&nvmeq->sq_dma_addr, GFP_KERNEL);
+	if (!nvmeq->sq_cmds)
+		return -ENOMEM;
 	return 0;
 }
 
@@ -1117,6 +1115,13 @@ static int nvme_create_queue(struct nvme
 	struct nvme_dev *dev = nvmeq->dev;
 	int result;
 
+	if (qid && dev->cmb && use_cmb_sqes && NVME_CMB_SQS(dev->cmbsz)) {
+		unsigned offset = (qid - 1) * roundup(SQ_SIZE(nvmeq->q_depth),
+						      dev->ctrl.page_size);
+		nvmeq->sq_dma_addr = dev->cmb_bus_addr + offset;
+		nvmeq->sq_cmds_io = dev->cmb + offset;
+	}
+
 	nvmeq->cq_vector = qid - 1;
 	result = adapter_alloc_cq(dev, qid, nvmeq);
 	if (result < 0)
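
A note on the design choice visible in the diff: after this change,
nvme_alloc_sq_cmds() only performs the one-time dma_alloc_coherent()
allocation for queues that do not live in the CMB, while the CMB offset
and the sq_dma_addr/sq_cmds_io pointers are recomputed in
nvme_create_queue(), which runs again on every controller reset, so the
queue's base always reflects the current mapping.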