Message-ID: <20200803100448.2738-1-stanley.chu@mediatek.com>
Date: Mon, 3 Aug 2020 18:04:48 +0800
From: Stanley Chu <stanley.chu@...iatek.com>
To: <linux-scsi@...r.kernel.org>, <martin.petersen@...cle.com>,
<avri.altman@....com>, <alim.akhtar@...sung.com>,
<jejb@...ux.ibm.com>, <cang@...eaurora.org>, <bvanassche@....org>
CC: <beanhuo@...ron.com>, <asutoshd@...eaurora.org>,
<matthias.bgg@...il.com>, <linux-mediatek@...ts.infradead.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <kuohong.wang@...iatek.com>,
<peter.wang@...iatek.com>, <chun-hung.wu@...iatek.com>,
<andy.teng@...iatek.com>, <chaotian.jing@...iatek.com>,
<cc.chou@...iatek.com>, <jiajie.hao@...iatek.com>,
Stanley Chu <stanley.chu@...iatek.com>
Subject: [PATCH v7] scsi: ufs: Quiesce all scsi devices before shutdown

Currently, I/O requests can still be submitted to the UFS device while
UFS is working on its shutdown flow. This may lead to races such as the
scenario below, and the system may finally crash due to unclocked
register accesses.

To fix this kind of issue, in ufshcd_shutdown(),

1. Use pm_runtime_get_sync() instead of resuming the UFS device with
   ufshcd_runtime_resume() "internally", so that the runtime PM
   framework manages and prevents concurrent runtime operations
   triggered by incoming I/O requests.

2. Specifically quiesce all SCSI devices to block all I/O requests
   after the device is resumed.

Example of racing scenario, while the UFS device is runtime-suspended:

Thread #1: Executing UFS shutdown flow, e.g.,
           ufshcd_suspend(UFS_SHUTDOWN_PM)

Thread #2: Executing runtime resume flow triggered by an I/O request,
           e.g., ufshcd_resume(UFS_RUNTIME_PM)

This breaks the assumption that UFS PM flows cannot run concurrently,
and unexpected racing behavior may happen.

Signed-off-by: Stanley Chu <stanley.chu@...iatek.com>
---
Changes:
- Since v6:
  - Quiesce all SCSI devices.
- Since v4:
  - Use pm_runtime_get_sync() instead of resuming the UFS device with
    ufshcd_runtime_resume() "internally".
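
Note for reviewers: below is a condensed, annotated sketch of what
ufshcd_shutdown() does once this patch is applied. It is illustration
only and not part of the patch; the helper name ufshcd_shutdown_sketch()
is hypothetical, and ufshcd_suspend()/UFS_SHUTDOWN_PM are internal to
ufshcd.c, so this only mirrors the flow inside ufshcd_shutdown() itself.
pm_runtime_get_sync(), scsi_target_quiesce(), list_for_each_entry() and
shost->__targets are existing kernel APIs used exactly as in the hunk
below.

#include <linux/pm_runtime.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>
#include "ufshcd.h"

/* Illustrative sketch only -- condensed from the hunk below. */
static int ufshcd_shutdown_sketch(struct ufs_hba *hba)
{
	struct scsi_target *starget;

	/*
	 * Hold a runtime PM reference so that a runtime resume/suspend
	 * triggered by late I/O cannot run concurrently with shutdown.
	 */
	pm_runtime_get_sync(hba->dev);

	/*
	 * Block further non-PM requests from the block layer. PM requests
	 * (BLK_MQ_REQ_PREEMPT) still pass, which the suspend path relies on.
	 */
	list_for_each_entry(starget, &hba->host->__targets, siblings)
		scsi_target_quiesce(starget);

	/* With I/O fenced off, power down the device and the link. */
	return ufshcd_suspend(hba, UFS_SHUTDOWN_PM);
}
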
---
 drivers/scsi/ufs/ufshcd.c | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 307622284239..7cb220b3fde0 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -8640,6 +8640,7 @@ EXPORT_SYMBOL(ufshcd_runtime_idle);
 int ufshcd_shutdown(struct ufs_hba *hba)
 {
 	int ret = 0;
+	struct scsi_target *starget;
 
 	if (!hba->is_powered)
 		goto out;
@@ -8647,11 +8648,27 @@ int ufshcd_shutdown(struct ufs_hba *hba)
 	if (ufshcd_is_ufs_dev_poweroff(hba) && ufshcd_is_link_off(hba))
 		goto out;
 
-	if (pm_runtime_suspended(hba->dev)) {
-		ret = ufshcd_runtime_resume(hba);
-		if (ret)
-			goto out;
-	}
+	/*
+	 * Let the runtime PM framework manage and prevent concurrent runtime
+	 * operations with the shutdown flow.
+	 */
+	pm_runtime_get_sync(hba->dev);
+
+	/*
+	 * Quiesce all SCSI devices to prevent any non-PM requests from being
+	 * sent from the block layer during and after shutdown.
+	 *
+	 * Here we cannot use blk_cleanup_queue() since PM requests
+	 * (with BLK_MQ_REQ_PREEMPT flag) still need to be sent
+	 * through the block layer. Therefore SCSI commands queued after the
+	 * scsi_target_quiesce() call returns will be blocked until
+	 * blk_cleanup_queue() is called.
+	 *
+	 * Besides, scsi_target_"un"quiesce (e.g., scsi_target_resume) can
+	 * be ignored since shutdown is a one-way flow.
+	 */
+	list_for_each_entry(starget, &hba->host->__targets, siblings)
+		scsi_target_quiesce(starget);
 
 	ret = ufshcd_suspend(hba, UFS_SHUTDOWN_PM);
 out:
--
2.18.0