Message-ID: <20220214095107.3t5en5a3tosaeoo6@ipetronik.com>
Date:   Mon, 14 Feb 2022 10:51:07 +0100
From:   Markus Blöchl <markus.bloechl@...tronik.com>
To:     Keith Busch <kbusch@...nel.org>, Jens Axboe <axboe@...nel.dk>
Cc:     Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
        linux-nvme@...ts.infradead.org, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, Stefan Roese <sr@...x.de>
Subject: [RFC PATCH] nvme: prevent hang on surprise removal of NVMe disk

After the surprise removal of a mounted NVMe disk the pciehp task
reliably hangs forever with a trace similar to this one:

 INFO: task irq/43-pciehp:64 blocked for more than 120 seconds.
 Call Trace:
  <TASK>
  __bio_queue_enter
  blk_mq_submit_bio
  submit_bio_noacct
  submit_bio_wait
  blkdev_issue_flush
  ext4_sync_fs
  sync_filesystem
  fsync_bdev
  delete_partition
  blk_drop_partitions
  del_gendisk
  nvme_ns_remove
  nvme_remove_namespaces
  nvme_remove
  pci_device_remove
  __device_release_driver
  device_release_driver
  pci_stop_bus_device
  pci_stop_and_remove_bus_device
  pciehp_unconfigure_device
  pciehp_disable_slot
  pciehp_handle_presence_or_link_change
  pciehp_ist
  </TASK>

I observed this with 5.15.5 from Debian bullseye-backports and confirmed
it with 5.17.0-rc3, but earlier kernels may be affected as well.

As far as I can tell, del_gendisk() only prevents new I/O after it has
flushed and dropped all partitions. But in the case of a surprise
removal, any new blocking I/O must be prevented first; I assume that
nvme_set_queue_dying() is supposed to do that.
Is there any other mechanism in place which should achieve this?

Unfortunately I am not very familiar with the blk_mq infrastructure so
any comments and suggestions are very welcome.

Best regards,

Markus


Signed-off-by: Markus Blöchl <markus.bloechl@...tronik.com>
---
 drivers/nvme/host/core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 961a5f8a44d2..0654cbe9b80e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4573,6 +4573,8 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
 	if (test_and_set_bit(NVME_NS_DEAD, &ns->flags))
 		return;
 
+	set_bit(GD_DEAD, &ns->disk->state);
+
 	blk_set_queue_dying(ns->queue);
 	nvme_start_ns_queue(ns);
 

base-commit: f1baf68e1383f6ed93eb9cff2866d46562607a43
-- 
2.35.1
