Message-ID: <efc80348-46c0-4307-a363-a242a7b44d94@quicinc.com>
Date: Mon, 24 Jun 2024 17:56:52 +0800
From: Ziqi Chen <quic_ziqichen@...cinc.com>
To: Bart Van Assche <bvanassche@....org>, <quic_cang@...cinc.com>,
        <mani@...nel.org>, <beanhuo@...ron.com>, <avri.altman@....com>,
        <junwoo80.lee@...sung.com>, <martin.petersen@...cle.com>,
        <quic_nguyenb@...cinc.com>, <quic_nitirawa@...cinc.com>,
        <quic_rampraka@...cinc.com>
CC: <linux-scsi@...r.kernel.org>, Alim Akhtar <alim.akhtar@...sung.com>,
        "James E.J. Bottomley" <jejb@...ux.ibm.com>,
        Peter Wang <peter.wang@...iatek.com>,
        Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>,
        Maramaina Naresh <quic_mnaresh@...cinc.com>,
        Asutosh Das <quic_asutoshd@...cinc.com>,
        "open list" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] scsi: ufs: core: quiesce request queues before check
 pending cmds

On 6/21/2024 4:57 AM, Bart Van Assche wrote:
> On 6/7/24 3:06 AM, Ziqi Chen wrote:
>> Fix this race condition by quiescing the request queues before calling
>> ufshcd_pending_cmds() so that block layer won't touch the budget map
>> when ufshcd_pending_cmds() is working on it. In addition, remove the
>> scsi layer blocking/unblocking to reduce redundancies and latencies.
> 
> Can you please help with testing whether the patch below would be a good
> alternative to your patch (compile-tested only)?
> 
> Thanks,
> 
> Bart.

Hi Bart,
Compile testing is OK, but I don't think this is a better alternative.

1. Why do we need to call blk_mq_quiesce_tagset() inside
ufshcd_scsi_block_requests() instead of directly replacing all
ufshcd_scsi_block_requests() calls with blk_mq_quiesce_tagset()?
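
For example, just a rough sketch of what I mean (illustrative only, not a
tested change; the surrounding caller and the -EBUSY handling here are only
placeholders):

        /*
         * Quiesce every request queue of the SCSI host so that the block
         * layer cannot touch the budget map while ufshcd_pending_cmds()
         * walks it, then unquiesce once the check is done.
         */
        blk_mq_quiesce_tagset(&hba->host->tag_set);
        if (ufshcd_pending_cmds(hba))
                ret = -EBUSY;
        blk_mq_unquiesce_tagset(&hba->host->tag_set);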

2. This patch needs long-term stress testing, and I don't think many OEMs 
can wait for that, as this is a blocker issue for them.

BRs
Ziqi

> 
> diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
> index aa00978c6c0e..1d981283b03c 100644
> --- a/drivers/ufs/core/ufshcd.c
> +++ b/drivers/ufs/core/ufshcd.c
> @@ -332,14 +332,12 @@ static void ufshcd_configure_wb(struct ufs_hba *hba)
> 
>   static void ufshcd_scsi_unblock_requests(struct ufs_hba *hba)
>   {
> -    if (atomic_dec_and_test(&hba->scsi_block_reqs_cnt))
> -        scsi_unblock_requests(hba->host);
> +    blk_mq_unquiesce_tagset(&hba->host->tag_set);
>   }
> 
>   static void ufshcd_scsi_block_requests(struct ufs_hba *hba)
>   {
> -    if (atomic_inc_return(&hba->scsi_block_reqs_cnt) == 1)
> -        scsi_block_requests(hba->host);
> +    blk_mq_quiesce_tagset(&hba->host->tag_set);
>   }
> 
>   static void ufshcd_add_cmd_upiu_trace(struct ufs_hba *hba, unsigned int tag,
> @@ -10590,7 +10588,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
> 
>       /* Hold auto suspend until async scan completes */
>       pm_runtime_get_sync(dev);
> -    atomic_set(&hba->scsi_block_reqs_cnt, 0);
> +
>       /*
>        * We are assuming that device wasn't put in sleep/power-down
>        * state exclusively during the boot stage before kernel.
> diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
> index 443afb97a637..58705994fc46 100644
> --- a/include/ufs/ufshcd.h
> +++ b/include/ufs/ufshcd.h
> @@ -889,7 +889,6 @@ enum ufshcd_mcq_opr {
>    * @wb_mutex: used to serialize devfreq and sysfs write booster toggling
>    * @clk_scaling_lock: used to serialize device commands and clock scaling
>    * @desc_size: descriptor sizes reported by device
> - * @scsi_block_reqs_cnt: reference counting for scsi block requests
>    * @bsg_dev: struct device associated with the BSG queue
>    * @bsg_queue: BSG queue associated with the UFS controller
>    * @rpm_dev_flush_recheck_work: used to suspend from RPM (runtime power
> @@ -1050,7 +1049,6 @@ struct ufs_hba {
> 
>       struct mutex wb_mutex;
>       struct rw_semaphore clk_scaling_lock;
> -    atomic_t scsi_block_reqs_cnt;
> 
>       struct device        bsg_dev;
>       struct request_queue    *bsg_queue;
> 
