Message-Id: <20200420121513.219795685@linuxfoundation.org>
Date: Mon, 20 Apr 2020 14:38:38 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Hongwu Su <hongwus@...eaurora.org>,
Asutosh Das <asutoshd@...eaurora.org>,
Bean Huo <beanhuo@...ron.com>,
Stanley Chu <stanley.chu@...iatek.com>,
Can Guo <cang@...eaurora.org>,
"Martin K. Petersen" <martin.petersen@...cle.com>
Subject: [PATCH 5.6 24/71] scsi: ufs: Fix ufshcd_hold() caused scheduling while atomic
From: Can Guo <cang@...eaurora.org>
commit c63d6099a7959ecc919b2549dc6b71f53521f819 upstream.
The async version of ufshcd_hold() (async == true), which for now is only
called in the queuecommand path, is expected to work in atomic context and
thus must not sleep or schedule out. When it runs into the condition that
the clocks are ON but the link is still in hibern8 state, it should bail
out without flushing the clock ungate work.
Fixes: f2a785ac2312 ("scsi: ufshcd: Fix race between clk scaling and ungate work")
Link: https://lore.kernel.org/r/1581392451-28743-6-git-send-email-cang@codeaurora.org
Reviewed-by: Hongwu Su <hongwus@...eaurora.org>
Reviewed-by: Asutosh Das <asutoshd@...eaurora.org>
Reviewed-by: Bean Huo <beanhuo@...ron.com>
Reviewed-by: Stanley Chu <stanley.chu@...iatek.com>
Signed-off-by: Can Guo <cang@...eaurora.org>
Signed-off-by: Martin K. Petersen <martin.petersen@...cle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
drivers/scsi/ufs/ufshcd.c | 5 +++++
1 file changed, 5 insertions(+)
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -1518,6 +1518,11 @@ start:
*/
if (ufshcd_can_hibern8_during_gating(hba) &&
ufshcd_is_link_hibern8(hba)) {
+ if (async) {
+ rc = -EAGAIN;
+ hba->clk_gating.active_reqs--;
+ break;
+ }
spin_unlock_irqrestore(hba->host->host_lock, flags);
flush_work(&hba->clk_gating.ungate_work);
spin_lock_irqsave(hba->host->host_lock, flags);
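
For context, the new -EAGAIN return is consumed by the caller in the
queuecommand path. The sketch below shows roughly how that looks; it is a
simplified excerpt assuming the v5.6-era shape of ufshcd_queuecommand(),
with unrelated setup and error handling elided, not the exact upstream
function.

static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
{
	struct ufs_hba *hba = shost_priv(host);
	int err;

	/* Atomic (queuecommand) context: request the async clock grab. */
	err = ufshcd_hold(hba, true);
	if (err) {
		/*
		 * With this patch, ufshcd_hold() returns -EAGAIN while the
		 * link is still in hibern8; report HOST_BUSY so the SCSI
		 * midlayer requeues the command instead of sleeping here.
		 */
		return SCSI_MLQUEUE_HOST_BUSY;
	}

	/* ... build and issue the UFS command as usual ... */
	return 0;
}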