Message-ID: <aRcVWtE4fHCe3jjM@fedora>
Date: Fri, 14 Nov 2025 19:41:14 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Mohamed Khalfella <mkhalfella@...estorage.com>
Cc: Jens Axboe <axboe@...nel.dk>, Keith Busch <kbusch@...nel.org>,
Sagi Grimberg <sagi@...mberg.me>,
Chaitanya Kulkarni <kch@...dia.com>,
Casey Chen <cachen@...estorage.com>,
Vikas Manocha <vmanocha@...estorage.com>,
Yuanyuan Zhong <yzhong@...estorage.com>,
Hannes Reinecke <hare@...e.de>, linux-nvme@...ts.infradead.org,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] nvme: Convert tag_list mutex to rwsemaphore to avoid
deadlock

On Thu, Nov 13, 2025 at 12:23:20PM -0800, Mohamed Khalfella wrote:
> The blk_mq_{add,del}_queue_tag_set() functions add queues to and remove
> queues from a tagset, and they make sure that the tagset and its queues
> are marked as shared when two or more queues are attached to the same
> tagset. A tagset starts out unshared; when the number of added queues
> reaches two, blk_mq_add_queue_tag_set() marks it as shared along with
> all the queues attached to it. When the number of attached queues drops
> to one, blk_mq_del_queue_tag_set() needs to mark both the tagset and
> the remaining queue as unshared.
>
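For reference, the add/del transition described above looks roughly like
this (a simplified sketch, not the exact block/blk-mq.c code):

	static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
					     struct request_queue *q)
	{
		/* second queue being attached: mark everything shared */
		if (!list_empty(&set->tag_list) &&
		    !(set->flags & BLK_MQ_F_TAG_QUEUE_SHARED))
			blk_mq_update_tag_set_shared(set, true);
		list_add_tail(&q->tag_set_list, &set->tag_list);
	}

	static void blk_mq_del_queue_tag_set(struct request_queue *q)
	{
		struct blk_mq_tag_set *set = q->tag_set;

		list_del(&q->tag_set_list);
		/* only one queue left: mark tagset and that queue unshared */
		if (list_is_singular(&set->tag_list))
			blk_mq_update_tag_set_shared(set, false);
	}
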
> Both functions need to freeze the queues currently in the tagset before
> setting or unsetting the BLK_MQ_F_TAG_QUEUE_SHARED flag. While doing so,
> both functions hold the set->tag_list_lock mutex, which makes sense as
> we do not want queues to be added or deleted in the process. This used
> to work fine until commit 98d81f0df70c ("nvme: use
> blk_mq_[un]quiesce_tagset") made the nvme driver quiesce the tagset
> instead of quiescing individual queues. blk_mq_quiesce_tagset() does
> that job by quiescing the queues in set->tag_list while also holding
> set->tag_list_lock.
>
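And blk_mq_quiesce_tagset() takes the same lock that the add/del paths
hold around the freeze, i.e. roughly (again a simplified sketch, not the
exact code):

	/* add/del side: called with set->tag_list_lock held */
	static void blk_mq_update_tag_set_shared(struct blk_mq_tag_set *set,
						 bool shared)
	{
		struct request_queue *q;

		list_for_each_entry(q, &set->tag_list, tag_set_list) {
			blk_mq_freeze_queue(q);	/* may wait for in-flight I/O */
			queue_set_hctx_shared(q, shared);
			blk_mq_unfreeze_queue(q);
		}
	}

	/* quiesce side, since commit 98d81f0df70c */
	void blk_mq_quiesce_tagset(struct blk_mq_tag_set *set)
	{
		struct request_queue *q;

		mutex_lock(&set->tag_list_lock);
		list_for_each_entry(q, &set->tag_list, tag_set_list)
			blk_mq_quiesce_queue_nowait(q);
		blk_mq_wait_quiesce_done(set);
		mutex_unlock(&set->tag_list_lock);
	}
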
> This results in a deadlock between two threads with the following stack
> traces:
>
> __schedule+0x48e/0xed0
> schedule+0x5a/0xc0
> schedule_preempt_disabled+0x11/0x20
> __mutex_lock.constprop.0+0x3cc/0x760
> blk_mq_quiesce_tagset+0x26/0xd0
> nvme_dev_disable_locked+0x77/0x280 [nvme]
> nvme_timeout+0x268/0x320 [nvme]
> blk_mq_handle_expired+0x5d/0x90
> bt_iter+0x7e/0x90
> blk_mq_queue_tag_busy_iter+0x2b2/0x590
> ? __blk_mq_complete_request_remote+0x10/0x10
> ? __blk_mq_complete_request_remote+0x10/0x10
> blk_mq_timeout_work+0x15b/0x1a0
> process_one_work+0x133/0x2f0
> ? mod_delayed_work_on+0x90/0x90
> worker_thread+0x2ec/0x400
> ? mod_delayed_work_on+0x90/0x90
> kthread+0xe2/0x110
> ? kthread_complete_and_exit+0x20/0x20
> ret_from_fork+0x2d/0x50
> ? kthread_complete_and_exit+0x20/0x20
> ret_from_fork_asm+0x11/0x20
>
> __schedule+0x48e/0xed0
> schedule+0x5a/0xc0
> blk_mq_freeze_queue_wait+0x62/0x90
> ? destroy_sched_domains_rcu+0x30/0x30
> blk_mq_exit_queue+0x151/0x180
> disk_release+0xe3/0xf0
> device_release+0x31/0x90
> kobject_put+0x6d/0x180
> nvme_scan_ns+0x858/0xc90 [nvme_core]
> ? nvme_scan_work+0x281/0x560 [nvme_core]
> nvme_scan_work+0x281/0x560 [nvme_core]
> process_one_work+0x133/0x2f0
> ? mod_delayed_work_on+0x90/0x90
> worker_thread+0x2ec/0x400
> ? mod_delayed_work_on+0x90/0x90
> kthread+0xe2/0x110
> ? kthread_complete_and_exit+0x20/0x20
> ret_from_fork+0x2d/0x50
> ? kthread_complete_and_exit+0x20/0x20
> ret_from_fork_asm+0x11/0x20

This is an AB-BA deadlock; lockdep should have complained about it, but
nvme doesn't support the owned (lockdep-tracked) queue freeze.
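
Reading the two stacks above, the ordering is roughly:

    T1 (blk_mq_timeout_work):
        nvme_timeout()
          nvme_dev_disable_locked()
            blk_mq_quiesce_tagset()      -> blocks on set->tag_list_lock,
                                            which is held by T2

    T2 (nvme_scan_work -> disk_release):
        blk_mq_exit_queue()              -> takes set->tag_list_lock, then
          blk_mq_freeze_queue_wait()     -> waits for requests that only T1's
                                            controller teardown can finish
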
Maybe the following change can avoid it?

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c916176bd9f0..9967c4a7e72d 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -3004,6 +3004,7 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
 	bool dead;
 
 	mutex_lock(&dev->shutdown_lock);
+	nvme_quiesce_io_queues(&dev->ctrl);
 	dead = nvme_pci_ctrl_is_dead(dev);
 	if (state == NVME_CTRL_LIVE || state == NVME_CTRL_RESETTING) {
 		if (pci_is_enabled(pdev))
@@ -3016,8 +3017,6 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
 			nvme_wait_freeze_timeout(&dev->ctrl, NVME_IO_TIMEOUT);
 	}
 
-	nvme_quiesce_io_queues(&dev->ctrl);
-
 	if (!dead && dev->ctrl.queue_count > 0) {
 		nvme_delete_io_queues(dev);
 		nvme_disable_ctrl(&dev->ctrl, shutdown);

Thanks,
Ming