Message-ID: <alpine.DEB.2.21.1909171121300.151243@chino.kir.corp.google.com>
Date: Tue, 17 Sep 2019 11:23:10 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Christoph Hellwig <hch@....de>, Keith Busch <kbusch@...nel.org>,
Jens Axboe <axboe@...nel.dk>
cc: Tom Lendacky <thomas.lendacky@....com>,
Brijesh Singh <brijesh.singh@....com>,
Ming Lei <ming.lei@...hat.com>,
Peter Gonda <pgonda@...gle.com>,
Jianxiong Gao <jxgao@...gle.com>, linux-kernel@...r.kernel.org,
x86@...nel.org, iommu@...ts.linux-foundation.org
Subject: Re: [bug] __blk_mq_run_hw_queue suspicious rcu usage
On Mon, 16 Sep 2019, David Rientjes wrote:
> Brijesh and Tom, we currently hit this any time we boot an SEV-enabled
> Ubuntu 18.04 guest; I assume that guest kernels, especially those of
> such major distributions, are expected to work without warnings and
> BUGs when certain drivers are enabled.
>
> If the vmap purge lock is to remain a mutex (any other reason that
> unmapping aliases can block?) then it appears that allocating a dmapool
> is the only alternative. Is this something that you'll be addressing
> generically or do we need to get buy-in from the maintainers of this
> specific driver?
>
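For reference, the dmapool alternative mentioned above would look roughly
like the sketch below: preallocate coherent DMA buffers at probe time so
the I/O path never has to remap (and therefore never sleeps on the vmap
purge mutex). This is only an illustration of the generic dmapool API,
not nvme code; the "mydrv" name and sizes are made up.

```c
#include <linux/dmapool.h>

/* At probe time: a pool of 256-byte, 8-byte-aligned coherent buffers. */
struct dma_pool *pool;
dma_addr_t dma_handle;
void *buf;

pool = dma_pool_create("mydrv_pool", &pdev->dev, 256, 8, 0);
if (!pool)
	return -ENOMEM;

/* In the I/O path: pool allocations are non-blocking with GFP_ATOMIC. */
buf = dma_pool_alloc(pool, GFP_ATOMIC, &dma_handle);
if (buf) {
	/* ... use buf / dma_handle for the transfer ... */
	dma_pool_free(pool, buf, dma_handle);
}

/* At teardown: */
dma_pool_destroy(pool);
```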
We've found that the following patch, applied on top of 5.2.14,
suppresses the warnings.
Christoph, Keith, Jens, is this something that we could do for the nvme
driver? I'll happily propose it formally if it would be acceptable.
Thanks!
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1613,7 +1613,8 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
dev->admin_tagset.timeout = ADMIN_TIMEOUT;
dev->admin_tagset.numa_node = dev_to_node(dev->dev);
dev->admin_tagset.cmd_size = sizeof(struct nvme_iod);
- dev->admin_tagset.flags = BLK_MQ_F_NO_SCHED;
+ dev->admin_tagset.flags = BLK_MQ_F_NO_SCHED |
+ BLK_MQ_F_BLOCKING;
dev->admin_tagset.driver_data = dev;
if (blk_mq_alloc_tag_set(&dev->admin_tagset))
@@ -2262,7 +2263,8 @@ static int nvme_dev_add(struct nvme_dev *dev)
dev->tagset.queue_depth =
min_t(int, dev->q_depth, BLK_MQ_MAX_DEPTH) - 1;
dev->tagset.cmd_size = sizeof(struct nvme_iod);
- dev->tagset.flags = BLK_MQ_F_SHOULD_MERGE;
+ dev->tagset.flags = BLK_MQ_F_SHOULD_MERGE |
+ BLK_MQ_F_BLOCKING;
dev->tagset.driver_data = dev;
ret = blk_mq_alloc_tag_set(&dev->tagset);