Message-ID: <08c4bb7e-830e-c9e6-2537-18131c7e0fc6@acm.org>
Date:   Mon, 23 Aug 2021 09:15:34 -0700
From:   Bart Van Assche <bvanassche@....org>
To:     lijinlin3@...wei.com, jejb@...ux.ibm.com,
        martin.petersen@...cle.com, linux-scsi@...r.kernel.org,
        linux-kernel@...r.kernel.org
Cc:     john.garry@...wei.com, qiulaibin@...wei.com, linfeilong@...wei.com,
        wubo40@...wei.com
Subject: Re: [PATCH v2] scsi: core: Fix hang of freezing queue between
 blocking and running device

On 8/9/21 7:13 AM, lijinlin3@...wei.com wrote:
> From: Li Jinlin <lijinlin3@...wei.com>
> 
> We found a hang issue; the steps to reproduce it are as follows:
>    1. blocking device via scsi_device_set_state()
>    2. dd if=/dev/sda of=/mnt/t.log bs=1M count=10
>    3. echo none > /sys/block/sda/queue/scheduler
>    4. echo "running" >/sys/block/sda/device/state
> 
> Steps 3 and 4 should both finish after step 4 is issued, but instead they hang.
> 
>    CPU#0               CPU#1                CPU#2
>    ---------------     ----------------     ----------------
>                                             Step 1: blocking device
> 
>                                             Step 2: dd xxxx
>                                                    ^^^^^^ get request
>                                                           q_usage_counter++
> 
>                        Step 3: switching scheduler
>                        elv_iosched_store
>                          elevator_switch
>                            blk_mq_freeze_queue
>                              blk_freeze_queue
>                                > blk_freeze_queue_start
>                                  ^^^^^^ mq_freeze_depth++
> 
>                                > blk_mq_run_hw_queues
>                                  ^^^^^^ can't run queue when dev blocked
> 
>                                > blk_mq_freeze_queue_wait
>                                  ^^^^^^ Hang here!!!
>                                         wait q_usage_counter==0
> 
>    Step 4: running device
>    store_state_field
>      scsi_rescan_device
>        scsi_attach_vpd
>          scsi_vpd_inquiry
>            __scsi_execute
>              blk_get_request
>                blk_mq_alloc_request
>                  blk_queue_enter
>                  ^^^^^^ Hang here!!!
>                         wait mq_freeze_depth==0
> 
>      blk_mq_run_hw_queues
>      ^^^^^^ dispatch IO, q_usage_counter will reduce to zero
> 
>                              blk_mq_unfreeze_queue
>                              ^^^^^ mq_freeze_depth--
> 
> Steps 3 and 4 end up waiting for each other.
> 
> To fix this, we need to run the queue before rescanning the device when
> the device state changes to SDEV_RUNNING.
> 
> Fixes: f0f82e2476f6 ("scsi: core: Fix capacity set to zero after offlinining device")
> Signed-off-by: Li Jinlin <lijinlin3@...wei.com>
> Signed-off-by: Qiu Laibin <qiulaibin@...wei.com>
> ---
> Changes since v1, sent with Message-ID:
> 20210805143231.1713299-1-lijinlin3@...wei.com
> 
>   - Modify the subject to make it distinct
>   - Modify the message to fix typos and make it clearer
>   - Reduce the number of Signed-off-by tags
> 
>   drivers/scsi/scsi_sysfs.c | 6 +++---
>   1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
> index c3a710bceba0..aa701582c950 100644
> --- a/drivers/scsi/scsi_sysfs.c
> +++ b/drivers/scsi/scsi_sysfs.c
> @@ -809,12 +809,12 @@ store_state_field(struct device *dev, struct device_attribute *attr,
>   	ret = scsi_device_set_state(sdev, state);
>   	/*
>   	 * If the device state changes to SDEV_RUNNING, we need to
> -	 * rescan the device to revalidate it, and run the queue to
> -	 * avoid I/O hang.
> +	 * run the queue to avoid I/O hang, and rescan the device
> +	 * to revalidate it.
>   	 */
>   	if (ret == 0 && state == SDEV_RUNNING) {
> -		scsi_rescan_device(dev);
>   		blk_mq_run_hw_queues(sdev->request_queue, true);
> +		scsi_rescan_device(dev);
>   	}
>   	mutex_unlock(&sdev->state_mutex);
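
For reference, with the two calls swapped, the hunk above resolves to roughly
the following (a sketch reassembled from the diff, not copied verbatim from
the tree):

	ret = scsi_device_set_state(sdev, state);
	/*
	 * If the device state changes to SDEV_RUNNING, we need to
	 * run the queue to avoid I/O hang, and rescan the device
	 * to revalidate it.
	 */
	if (ret == 0 && state == SDEV_RUNNING) {
		blk_mq_run_hw_queues(sdev->request_queue, true);
		scsi_rescan_device(dev);
	}
	mutex_unlock(&sdev->state_mutex);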

The patch looks fine to me but I think the comment in 
store_state_field() should be expanded. Although the description in the 
commit message makes it clear how I/O may hang, that is not clear from 
the source code comment. Please mention in the comment that running the 
queue first is necessary because another thread may be waiting inside 
blk_mq_freeze_queue_wait() and because that call may be waiting for 
pending I/O to finish.
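
Something along the lines of the sketch below would capture that; the wording
is only a suggestion, adjust as you see fit:

	/*
	 * If the device state changes to SDEV_RUNNING, run the queue
	 * first: another thread may be blocked in
	 * blk_mq_freeze_queue_wait() until pending requests finish,
	 * and those requests can only finish once the queue is run
	 * again. Only then rescan the device to revalidate it, since
	 * the rescan itself submits I/O and would otherwise block in
	 * blk_queue_enter() while the queue is still frozen.
	 */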

Thanks,

Bart.
