Date:   Thu, 5 Aug 2021 22:32:31 +0800
From:   <lijinlin3@...wei.com>
To:     <jejb@...ux.ibm.com>, <martin.petersen@...cle.com>,
        <linux-scsi@...r.kernel.org>, <linux-kernel@...r.kernel.org>
CC:     <john.garry@...wei.com>, <bvanassche@....org>,
        <qiulaibin@...wei.com>, <linfeilong@...wei.com>,
        <wubo40@...wei.com>, <lijinlin3@...wei.com>
Subject: [PATCH] scsi: core: Run queue first after setting device to running

From: Li Jinlin <lijinlin3@...wei.com>

We found a hang issue; the steps to reproduce it are as follows:
  1. echo "blocked" >/sys/block/sda/device/state
  2. dd if=/dev/sda of=/mnt/t.log bs=1M count=10
  3. echo none > /sys/block/sda/queue/scheduler
  4. echo "running" >/sys/block/sda/device/state

Step3 and Step4 should both complete once Step4 runs, but instead they hang.
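
For context, writing "blocked" or "running" to the state attribute lands
in store_state_field() in drivers/scsi/scsi_sysfs.c, which maps the
string to a device state and applies it, conceptually (simplified, not
the exact kernel code):

  /* "blocked" -> SDEV_BLOCK:   dispatch of queued requests stops */
  /* "running" -> SDEV_RUNNING: dispatch may resume               */
  ret = scsi_device_set_state(sdev, state);

While the device is in SDEV_BLOCK, new requests can still be allocated
(each taking a q_usage_counter reference) but cannot be dispatched,
which is why the counter cannot drain in the trace below.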

  CPU#0               CPU#1                CPU#2
  ---------------     ----------------     ----------------
                                           Step1: blocking device

                                           Step2: dd xxxx
                                                  ^^^^^^ get request
                                                         q_usage_counter++

                      Step3: switching scheduler
                      elv_iosched_store
                        elevator_switch
                          blk_mq_freeze_queue
                            blk_freeze_queue
                              > blk_freeze_queue_start
                                ^^^^^^ mq_freeze_depth++

                              > blk_mq_run_hw_queues
                                ^^^^^^ can't run queue when dev blocked

                              > blk_mq_freeze_queue_wait
                                ^^^^^^ Hang here!!! 
                                       wait q_usage_counter==0

  Step4: running device
  store_state_field
    scsi_rescan_device
      scsi_attach_vpd
        scsi_vpd_inquiry
          __scsi_execute
            blk_get_request
              blk_mq_alloc_request
                blk_queue_enter
                ^^^^^^ Hang here!!!
                       wait mq_freeze_depth==0 

    blk_mq_run_hw_queues                                           
    ^^^^^^ dispatch IO, q_usage_counter will reduce to zero

                            blk_mq_unfreeze_queue
                            ^^^^^ mq_freeze_depth--

Step3 and Step4 wait for each other, causing the hang.
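
In other words, both sides sleep on the same wait queue with conditions
that cannot both become true here. Roughly (simplified from
block/blk-mq.c and block/blk-core.c, not the verbatim kernel code):

  /* Step3, blk_mq_freeze_queue_wait(): wait for all users to leave */
  wait_event(q->mq_freeze_wq,
             percpu_ref_is_zero(&q->q_usage_counter));

  /* Step4, blk_queue_enter(): wait for the freeze to end */
  wait_event(q->mq_freeze_wq,
             !q->mq_freeze_depth || blk_queue_dying(q));

Step3 cannot see q_usage_counter drop to zero until the request from
Step2 is dispatched, and Step4, which would run the queue and dispatch
it, cannot get past blk_queue_enter() while mq_freeze_depth is
non-zero.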

Fix this by running the queue first, before rescanning the device, when
the device state changes to SDEV_RUNNING.

Fixes: f0f82e2476f6 ("scsi: core: Fix capacity set to zero after offlinining device")
Signed-off-by: Li Jinlin <lijinlin3@...wei.com>
Signed-off-by: Qiu Laibin <qiulaibin@...wei.com>
Signed-off-by: Wu Bo <wubo40@...wei.com>
---
 drivers/scsi/scsi_sysfs.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
index c3a710bceba0..aa701582c950 100644
--- a/drivers/scsi/scsi_sysfs.c
+++ b/drivers/scsi/scsi_sysfs.c
@@ -809,12 +809,12 @@ store_state_field(struct device *dev, struct device_attribute *attr,
 	ret = scsi_device_set_state(sdev, state);
 	/*
 	 * If the device state changes to SDEV_RUNNING, we need to
-	 * rescan the device to revalidate it, and run the queue to
-	 * avoid I/O hang.
+	 * run the queue to avoid I/O hang, and rescan the device
+	 * to revalidate it.
 	 */
 	if (ret == 0 && state == SDEV_RUNNING) {
-		scsi_rescan_device(dev);
 		blk_mq_run_hw_queues(sdev->request_queue, true);
+		scsi_rescan_device(dev);
 	}
 	mutex_unlock(&sdev->state_mutex);
 
-- 
2.27.0
