Message-ID: <20230424034039.20529-2-alice.chao@mediatek.com>
Date: Mon, 24 Apr 2023 11:40:35 +0800
From: Alice Chao <alice.chao@...iatek.com>
To: Alim Akhtar <alim.akhtar@...sung.com>,
Avri Altman <avri.altman@....com>,
Bart Van Assche <bvanassche@....org>,
"James E.J. Bottomley" <jejb@...ux.ibm.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
Matthias Brugger <matthias.bgg@...il.com>,
AngeloGioacchino Del Regno
<angelogioacchino.delregno@...labora.com>,
Manivannan Sadhasivam <mani@...nel.org>,
Can Guo <quic_cang@...cinc.com>,
Stanley Chu <stanley.chu@...iatek.com>
CC: <peter.wang@...iatek.com>, <chun-hung.wu@...iatek.com>,
<alice.chao@...iatek.com>, <powen.kao@...iatek.com>,
<naomi.chu@...iatek.com>, <cc.chou@...iatek.com>,
<chaotian.jing@...iatek.com>, <jiajie.hao@...iatek.com>,
<tun-yu.yu@...iatek.com>, <eddie.huang@...iatek.com>,
<wsd_upstream@...iatek.com>,
Asutosh Das <quic_asutoshd@...cinc.com>,
<linux-scsi@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-mediatek@...ts.infradead.org>
Subject: [PATCH v2 1/1] scsi: ufs: core: Fix &hwq->cq_lock deadlock issue

[name:lockdep&]WARNING: inconsistent lock state
[name:lockdep&]--------------------------------
[name:lockdep&]inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage.
[name:lockdep&]kworker/u16:4/260 [HC0[0]:SC0[0]:HE1:SE1] takes:
ffffff8028444600 (&hwq->cq_lock){?.-.}-{2:2}, at:
ufshcd_mcq_poll_cqe_lock+0x30/0xe0
[name:lockdep&]{IN-HARDIRQ-W} state was registered at:
lock_acquire+0x17c/0x33c
_raw_spin_lock+0x5c/0x7c
ufshcd_mcq_poll_cqe_lock+0x30/0xe0
ufs_mtk_mcq_intr+0x60/0x1bc [ufs_mediatek_mod]
__handle_irq_event_percpu+0x140/0x3ec
handle_irq_event+0x50/0xd8
handle_fasteoi_irq+0x148/0x2b0
generic_handle_domain_irq+0x4c/0x6c
gic_handle_irq+0x58/0x134
call_on_irq_stack+0x40/0x74
do_interrupt_handler+0x84/0xe4
el1_interrupt+0x3c/0x78
<snip>
Possible unsafe locking scenario:
CPU0
----
lock(&hwq->cq_lock);
<Interrupt>
lock(&hwq->cq_lock);
*** DEADLOCK ***
2 locks held by kworker/u16:4/260:
[name:lockdep&]
stack backtrace:
CPU: 7 PID: 260 Comm: kworker/u16:4 Tainted: G S W OE
6.1.17-mainline-android14-2-g277223301adb #1
Workqueue: ufs_eh_wq_0 ufshcd_err_handler
Call trace:
dump_backtrace+0x10c/0x160
show_stack+0x20/0x30
dump_stack_lvl+0x98/0xd8
dump_stack+0x20/0x60
print_usage_bug+0x584/0x76c
mark_lock_irq+0x488/0x510
mark_lock+0x1ec/0x25c
__lock_acquire+0x4d8/0xffc
lock_acquire+0x17c/0x33c
_raw_spin_lock+0x5c/0x7c
ufshcd_mcq_poll_cqe_lock+0x30/0xe0
ufshcd_poll+0x68/0x1b0
ufshcd_transfer_req_compl+0x9c/0xc8
ufshcd_err_handler+0x3bc/0xea0
process_one_work+0x2f4/0x7e8
worker_thread+0x234/0x450
kthread+0x110/0x134
ret_from_fork+0x10/0x20

For ufs_mtk_mcq_intr(), refer to:
https://lore.kernel.org/all/20230328103423.10970-3-powen.kao@mediatek.com/

While ufshcd_err_handler() runs, it takes &hwq->cq_lock via
ufshcd_mcq_poll_cqe_lock() with interrupts enabled, so a CQ event
interrupt can fire on the same CPU and spin on the same lock. This can
happen both in the upstream code path ufshcd_handle_mcq_cq_events()
and in ufs_mtk_mcq_intr(). Lockdep emits the warning above because
&hwq->cq_lock is taken in hard-IRQ context but is also taken elsewhere
with interrupts enabled. Use spin_lock_irqsave()/spin_unlock_irqrestore()
instead of spin_lock()/spin_unlock() in ufshcd_mcq_poll_cqe_lock() to
resolve the deadlock.
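
For illustration, a minimal sketch of the locking pattern involved
(hypothetical, self-contained example; cq_lock, poll_completions() and
cq_event_irq() are placeholder names, not the driver's actual symbols):

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(cq_lock);	/* stands in for hwq->cq_lock */

/* Process context, e.g. an error handler polling for completions. */
static void poll_completions(void)
{
	unsigned long flags;

	/*
	 * A plain spin_lock() would leave local interrupts enabled
	 * here. If the CQ event interrupt then fired on this CPU and
	 * its handler tried to take the same lock, the CPU would spin
	 * forever on a lock it already holds. Disabling interrupts
	 * first closes that window.
	 */
	spin_lock_irqsave(&cq_lock, flags);
	/* ... consume completion queue entries ... */
	spin_unlock_irqrestore(&cq_lock, flags);
}

/* Hard-IRQ context, e.g. the CQ event interrupt handler. */
static irqreturn_t cq_event_irq(int irq, void *data)
{
	spin_lock(&cq_lock);	/* interrupts already off in hardirq */
	/* ... consume completion queue entries ... */
	spin_unlock(&cq_lock);
	return IRQ_HANDLED;
}

spin_lock_irqsave() is chosen over spin_lock_irq() because the
function may be reached from contexts where interrupts are already
disabled; saving and restoring the flags is correct either way.
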
Fixes: ed975065c31c ("scsi: ufs: core: mcq: Add completion support in poll")
Signed-off-by: Alice Chao <alice.chao@...iatek.com>
---
drivers/ufs/core/ufs-mcq.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/ufs/core/ufs-mcq.c b/drivers/ufs/core/ufs-mcq.c
index 31df052fbc41..202ff71e1b58 100644
--- a/drivers/ufs/core/ufs-mcq.c
+++ b/drivers/ufs/core/ufs-mcq.c
@@ -299,11 +299,11 @@ EXPORT_SYMBOL_GPL(ufshcd_mcq_poll_cqe_nolock);
 unsigned long ufshcd_mcq_poll_cqe_lock(struct ufs_hba *hba,
 				       struct ufs_hw_queue *hwq)
 {
-	unsigned long completed_reqs;
+	unsigned long completed_reqs, flags;
 
-	spin_lock(&hwq->cq_lock);
+	spin_lock_irqsave(&hwq->cq_lock, flags);
 	completed_reqs = ufshcd_mcq_poll_cqe_nolock(hba, hwq);
-	spin_unlock(&hwq->cq_lock);
+	spin_unlock_irqrestore(&hwq->cq_lock, flags);
 
 	return completed_reqs;
 }
--
2.18.0