Message-ID: <201a7e9e-4782-4f71-a73b-9d58a51ee8ec@acm.org>
Date: Thu, 4 Dec 2025 13:22:49 -1000
From: Bart Van Assche <bvanassche@....org>
To: Keith Busch <kbusch@...nel.org>
Cc: Mohamed Khalfella <mkhalfella@...estorage.com>,
Chaitanya Kulkarni <kch@...dia.com>, Christoph Hellwig <hch@....de>,
Jens Axboe <axboe@...nel.dk>, Sagi Grimberg <sagi@...mberg.me>,
Casey Chen <cachen@...estorage.com>, Yuanyuan Zhong
<yzhong@...estorage.com>, Hannes Reinecke <hare@...e.de>,
Ming Lei <ming.lei@...hat.com>, Waiman Long <llong@...hat.com>,
Hillf Danton <hdanton@...a.com>, linux-nvme@...ts.infradead.org,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] block: Use RCU in blk_mq_[un]quiesce_tagset() instead
of set->tag_list_lock
On 12/4/25 11:26 AM, Keith Busch wrote:
> On Thu, Dec 04, 2025 at 10:24:03AM -1000, Bart Van Assche wrote:
>> Hence, the deadlock can be
>> solved by removing the blk_mq_quiesce_tagset() call from nvme_timeout()
>> and by failing I/O from inside nvme_timeout(). If nvme_timeout() fails
>> I/O and does not call blk_mq_quiesce_tagset() then the
>> blk_mq_freeze_queue_wait() call will finish instead of triggering a
>> deadlock. However, I do not know whether this proposal is acceptable
>> to the NVMe maintainers.
>
> You periodically make this suggestion, but there's never a reason
> offered to introduce yet another work queue for the driver to
> synchronize with at various points. The whole point of making the
> blk-mq timeout handler run in a work queue (it used to be a timer)
> was so that we could do blocking actions like this.

Hi Keith,

The blk_mq_quiesce_tagset() call from the NVMe timeout handler is
unfortunate because it triggers a deadlock with
blk_mq_update_tag_set_shared().
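
Roughly, the cycle looks like this (a sketch of the two paths as I
understand them, not the exact call chains):

  Task A: blk_mq_update_tag_set_shared(), called with
          set->tag_list_lock held
    -> blk_mq_freeze_queue_wait()   /* waits for outstanding requests */

  Task B: nvme_timeout(), handling one of those outstanding requests
    -> blk_mq_quiesce_tagset()
       -> mutex_lock(&set->tag_list_lock)   /* blocks behind task A */

Task A waits for the request that task B is handling, and task B waits
for the lock that task A holds, so neither can make progress.
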
I proposed modifying the NVMe driver because I think that is a better
approach than introducing a new synchronize_rcu() call in the block
layer core.
However, there may be better approaches for fixing this in the NVMe
driver than what I proposed so far.
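
To make the idea concrete, here is a minimal sketch of what I have in
mind, assuming the timeout handler may complete the timed-out request
with an error instead of quiescing the tagset (untested; the status
code and the surrounding error handling are illustrative only):

static enum blk_eh_timer_return nvme_timeout(struct request *req)
{
	/*
	 * Do not call blk_mq_quiesce_tagset() here: it takes
	 * set->tag_list_lock and hence can deadlock against
	 * blk_mq_update_tag_set_shared().
	 */
	nvme_req(req)->status = NVME_SC_HOST_ABORTED_CMD;
	blk_mq_complete_request(req);	/* fail the I/O */
	return BLK_EH_DONE;
}

With the quiesce call gone, blk_mq_freeze_queue_wait() sees the failed
request complete and returns instead of deadlocking.
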
Thanks,
Bart.