Message-Id: <20181031230754.29029-80-sashal@kernel.org>
Date:   Wed, 31 Oct 2018 19:07:08 -0400
From:   Sasha Levin <sashal@...nel.org>
To:     stable@...r.kernel.org, linux-kernel@...r.kernel.org
Cc:     Evan Green <evgreen@...omium.org>,
        "Martin K . Petersen" <martin.petersen@...cle.com>,
        Sasha Levin <sashal@...nel.org>
Subject: [PATCH AUTOSEL 4.18 080/126] scsi: ufs: Schedule clk gating work on correct queue

From: Evan Green <evgreen@...omium.org>

[ Upstream commit f4bb7704699beee9edfbee875daa9089c86cf724 ]

With commit 10e5e37581fc ("scsi: ufs: Add clock ungating to a separate
workqueue"), clock gating work was moved to a separate work queue with
WQ_MEM_RECLAIM set, since clock gating could occur from a memory reclaim
context. Unfortunately, clk_gating.gate_work was left queued via
schedule_delayed_work, which places it on the system workqueue, a queue
that does not have WQ_MEM_RECLAIM set. Because ufshcd_ungate_work
attempts to cancel gate_work, the following warning appears:

[   14.174170] workqueue: WQ_MEM_RECLAIM ufs_clk_gating_0:ufshcd_ungate_work is flushing !WQ_MEM_RECLAIM events:ufshcd_gate_work
[   14.174179] WARNING: CPU: 4 PID: 173 at kernel/workqueue.c:2440 check_flush_dependency+0x110/0x118
[   14.205725] CPU: 4 PID: 173 Comm: kworker/u16:3 Not tainted 4.14.68 #1
[   14.212437] Hardware name: Google Cheza (rev1) (DT)
[   14.217459] Workqueue: ufs_clk_gating_0 ufshcd_ungate_work
[   14.223107] task: ffffffc0f6a40080 task.stack: ffffff800a490000
[   14.229195] PC is at check_flush_dependency+0x110/0x118
[   14.234569] LR is at check_flush_dependency+0x110/0x118
[   14.239944] pc : [<ffffff80080cad14>] lr : [<ffffff80080cad14>] pstate: 60c001c9
[   14.333050] Call trace:
[   14.427767] [<ffffff80080cad14>] check_flush_dependency+0x110/0x118
[   14.434219] [<ffffff80080cafec>] start_flush_work+0xac/0x1fc
[   14.440046] [<ffffff80080caeec>] flush_work+0x40/0x94
[   14.445246] [<ffffff80080cb288>] __cancel_work_timer+0x11c/0x1b8
[   14.451433] [<ffffff80080cb4b8>] cancel_delayed_work_sync+0x20/0x30
[   14.457886] [<ffffff80085b9294>] ufshcd_ungate_work+0x24/0xd0
[   14.463800] [<ffffff80080cfb04>] process_one_work+0x32c/0x690
[   14.469713] [<ffffff80080d0154>] worker_thread+0x218/0x338
[   14.475361] [<ffffff80080d527c>] kthread+0x120/0x130
[   14.480470] [<ffffff8008084814>] ret_from_fork+0x10/0x18
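
For reference, a reduced, hypothetical sketch of the pattern that trips
check_flush_dependency(); the identifiers below are illustrative
stand-ins, not the actual ufshcd code:

#include <linux/workqueue.h>
#include <linux/jiffies.h>

/* Gate work ends up on the system workqueue (no WQ_MEM_RECLAIM). */
static void gate_fn(struct work_struct *work)
{
	/* turn the clocks off */
}
static DECLARE_DELAYED_WORK(gate_work, gate_fn);

static void release_clocks(void)
{
	/* schedule_delayed_work() queues onto the system workqueue */
	schedule_delayed_work(&gate_work, msecs_to_jiffies(150));
}

/* Runs on a WQ_MEM_RECLAIM workqueue. */
static void ungate_fn(struct work_struct *work)
{
	/*
	 * Cancelling synchronously may have to flush gate_work; flushing
	 * !WQ_MEM_RECLAIM work from a WQ_MEM_RECLAIM worker triggers the
	 * warning shown above.
	 */
	cancel_delayed_work_sync(&gate_work);
}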

The simple solution is to put the gate_work on the same WQ_MEM_RECLAIM
work queue as the ungate_work.
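
The corresponding fix pattern, again only as an illustrative sketch
(reusing gate_work from the sketch above; the real driver queues onto
hba->clk_gating.clk_gating_workq, which is allocated with
WQ_MEM_RECLAIM):

static struct workqueue_struct *clk_gating_wq;

static int init_clk_gating(void)
{
	/* Reclaim-safe queue shared by the gate and ungate work. */
	clk_gating_wq = alloc_ordered_workqueue("example_clk_gating",
						WQ_MEM_RECLAIM);
	return clk_gating_wq ? 0 : -ENOMEM;
}

static void release_clocks(void)
{
	/* Queue gate_work on the same WQ_MEM_RECLAIM queue as ungate. */
	queue_delayed_work(clk_gating_wq, &gate_work,
			   msecs_to_jiffies(150));
}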

Fixes: 10e5e37581fc ("scsi: ufs: Add clock ungating to a separate workqueue")
Signed-off-by: Evan Green <evgreen@...omium.org>
Reviewed-by: Douglas Anderson <dianders@...omium.org>
Reviewed-by: Stephen Boyd <swboyd@...omium.org>
Signed-off-by: Martin K. Petersen <martin.petersen@...cle.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
 drivers/scsi/ufs/ufshcd.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 397081d320b1..83f71c266c66 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -1677,8 +1677,9 @@ static void __ufshcd_release(struct ufs_hba *hba)
 
 	hba->clk_gating.state = REQ_CLKS_OFF;
 	trace_ufshcd_clk_gating(dev_name(hba->dev), hba->clk_gating.state);
-	schedule_delayed_work(&hba->clk_gating.gate_work,
-			msecs_to_jiffies(hba->clk_gating.delay_ms));
+	queue_delayed_work(hba->clk_gating.clk_gating_workq,
+			   &hba->clk_gating.gate_work,
+			   msecs_to_jiffies(hba->clk_gating.delay_ms));
 }
 
 void ufshcd_release(struct ufs_hba *hba)
-- 
2.17.1
