Message-Id: <20230628045925.5261-1-dg573847474@gmail.com>
Date: Wed, 28 Jun 2023 04:59:25 +0000
From: Chengfeng Ye <dg573847474@...il.com>
To: dennis.dalessandro@...nelisnetworks.com, jgg@...pe.ca,
leon@...nel.org
Cc: linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org,
Chengfeng Ye <dg573847474@...il.com>
Subject: [PATCH] IB/hfi1: Fix potential deadlock on &sde->flushlist_lock

As &sde->flushlist_lock is acquired by the timer callback
sdma_err_progress_check() through a chain of calls in softirq context,
process-context code acquiring the lock should disable irqs.
Possible deadlock scenario:

sdma_send_txreq()
 -> spin_lock(&sde->flushlist_lock)
    <timer interrupt>
    -> sdma_err_progress_check()
       -> __sdma_process_event()
          -> sdma_set_state()
             -> sdma_flush()
                -> spin_lock_irqsave(&sde->flushlist_lock, flags) (deadlock here)
This flaw was found using an experimental static analysis tool we are
developing for irq-related deadlocks.

The tentative patch fixes the potential deadlock by acquiring the lock
with spin_lock_irqsave().
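
The same pattern in miniature (a hypothetical sketch with made-up names,
not hfi1 code) looks like this:

#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/timer.h>

/*
 * demo_lock/demo_list/demo_timer_fn are hypothetical. A lock that is
 * also taken from softirq context (e.g. a timer callback) must be
 * acquired with IRQs disabled in process context; otherwise the timer
 * can fire on the same CPU while the lock is held and spin forever.
 */
static DEFINE_SPINLOCK(demo_lock);
static LIST_HEAD(demo_list);

static void demo_timer_fn(struct timer_list *t)	/* softirq context */
{
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);
	/* ... flush demo_list ... */
	spin_unlock_irqrestore(&demo_lock, flags);
}

static void demo_add(struct list_head *item)	/* process context */
{
	unsigned long flags;

	/*
	 * A plain spin_lock() here could deadlock against demo_timer_fn()
	 * on the same CPU, so disable IRQs while holding the lock.
	 */
	spin_lock_irqsave(&demo_lock, flags);
	list_add_tail(item, &demo_list);
	spin_unlock_irqrestore(&demo_lock, flags);
}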
Signed-off-by: Chengfeng Ye <dg573847474@...il.com>
---
 drivers/infiniband/hw/hfi1/sdma.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
index bb2552dd29c1..0431f575c861 100644
--- a/drivers/infiniband/hw/hfi1/sdma.c
+++ b/drivers/infiniband/hw/hfi1/sdma.c
@@ -2371,9 +2371,9 @@ int sdma_send_txreq(struct sdma_engine *sde,
 	tx->sn = sde->tail_sn++;
 	trace_hfi1_sdma_in_sn(sde, tx->sn);
 #endif
-	spin_lock(&sde->flushlist_lock);
+	spin_lock_irqsave(&sde->flushlist_lock, flags);
 	list_add_tail(&tx->list, &sde->flushlist);
-	spin_unlock(&sde->flushlist_lock);
+	spin_unlock_irqrestore(&sde->flushlist_lock, flags);
 	iowait_inc_wait_count(wait, tx->num_desc);
 	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
 	ret = -ECOMM;
@@ -2459,7 +2459,7 @@ int sdma_send_txlist(struct sdma_engine *sde, struct iowait_work *wait,
 	*count_out = total_count;
 	return ret;
 unlock_noconn:
-	spin_lock(&sde->flushlist_lock);
+	spin_lock_irqsave(&sde->flushlist_lock, flags);
 	list_for_each_entry_safe(tx, tx_next, tx_list, list) {
 		tx->wait = iowait_ioww_to_iow(wait);
 		list_del_init(&tx->list);
@@ -2472,7 +2472,7 @@ int sdma_send_txlist(struct sdma_engine *sde, struct iowait_work *wait,
 		flush_count++;
 		iowait_inc_wait_count(wait, tx->num_desc);
 	}
-	spin_unlock(&sde->flushlist_lock);
+	spin_unlock_irqrestore(&sde->flushlist_lock, flags);
 	queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
 	ret = -ECOMM;
 	goto update_tail;
--
2.17.1