Message-Id: <20230816170013.4262-1-dg573847474@gmail.com>
Date: Wed, 16 Aug 2023 17:00:13 +0000
From: Chengfeng Ye <dg573847474@...il.com>
To: vkoul@...nel.org, sugaya.taichi@...ionext.com,
orito.takao@...ionext.com, len.baker@....com,
jaswinder.singh@...aro.org
Cc: dmaengine@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, Chengfeng Ye <dg573847474@...il.com>
Subject: [PATCH v2] dmaengine: milbeaut-hdmac: Fix potential deadlock on &mc->vc.lock

As &mc->vc.lock is acquired by milbeaut_hdmac_interrupt() under irq
context, any other acquisition of the same lock from process context
must disable irqs, otherwise a deadlock can occur if the interrupt
preempts the process-context code on the same CPU while the lock is
held.

milbeaut_hdmac_chan_config(), milbeaut_hdmac_chan_resume() and
milbeaut_hdmac_chan_pause() are such callbacks, and they do not disable
irqs by default.

Possible deadlock scenario:
milbeaut_hdmac_chan_config()
    -> spin_lock(&mc->vc.lock)
        <hard interrupt>
            -> milbeaut_hdmac_interrupt()
                -> spin_lock(&mc->vc.lock) (deadlock here)

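For reference, a minimal, driver-agnostic sketch of the locking rule
being applied (demo_lock, demo_irq_handler() and demo_config() are
made-up names for illustration, not part of this driver):

    /* Generic illustration only; not this driver's code. */
    #include <linux/spinlock.h>
    #include <linux/interrupt.h>

    /* A lock shared between process context and a hard-irq handler. */
    static DEFINE_SPINLOCK(demo_lock);

    static irqreturn_t demo_irq_handler(int irq, void *data)
    {
            /* hard-irq context: irqs are already off, plain spin_lock() is fine */
            spin_lock(&demo_lock);
            /* ... update shared state ... */
            spin_unlock(&demo_lock);
            return IRQ_HANDLED;
    }

    static int demo_config(void)
    {
            unsigned long flags;

            /*
             * process context: disable irqs so the handler cannot preempt
             * this CPU and spin on demo_lock while we hold it
             */
            spin_lock_irqsave(&demo_lock, flags);
            /* ... update shared state ... */
            spin_unlock_irqrestore(&demo_lock, flags);
            return 0;
    }
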
This flaw was found by an experimental static analysis tool I am
developing for irq-related deadlocks.

This patch fixes the potential deadlock by using spin_lock_irqsave() in
the three callbacks so that irqs are disabled while the lock is held.

Signed-off-by: Chengfeng Ye <dg573847474@...il.com>
---
Changes in v2:
- Also switch &mc->vc.lock to &vc->lock for consistency
---
 drivers/dma/milbeaut-hdmac.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/milbeaut-hdmac.c b/drivers/dma/milbeaut-hdmac.c
index 1b0a95892627..5c664c8c10f5 100644
--- a/drivers/dma/milbeaut-hdmac.c
+++ b/drivers/dma/milbeaut-hdmac.c
@@ -214,10 +214,11 @@ milbeaut_hdmac_chan_config(struct dma_chan *chan, struct dma_slave_config *cfg)
 {
 	struct virt_dma_chan *vc = to_virt_chan(chan);
 	struct milbeaut_hdmac_chan *mc = to_milbeaut_hdmac_chan(vc);
+	unsigned long flags;
 
-	spin_lock(&mc->vc.lock);
+	spin_lock_irqsave(&vc->lock, flags);
 	mc->cfg = *cfg;
-	spin_unlock(&mc->vc.lock);
+	spin_unlock_irqrestore(&vc->lock, flags);
 
 	return 0;
 }
@@ -226,13 +227,14 @@ static int milbeaut_hdmac_chan_pause(struct dma_chan *chan)
 {
 	struct virt_dma_chan *vc = to_virt_chan(chan);
 	struct milbeaut_hdmac_chan *mc = to_milbeaut_hdmac_chan(vc);
+	unsigned long flags;
 	u32 val;
 
-	spin_lock(&mc->vc.lock);
+	spin_lock_irqsave(&vc->lock, flags);
 	val = readl_relaxed(mc->reg_ch_base + MLB_HDMAC_DMACA);
 	val |= MLB_HDMAC_PB;
 	writel_relaxed(val, mc->reg_ch_base + MLB_HDMAC_DMACA);
-	spin_unlock(&mc->vc.lock);
+	spin_unlock_irqrestore(&vc->lock, flags);
 
 	return 0;
 }
@@ -241,13 +243,14 @@ static int milbeaut_hdmac_chan_resume(struct dma_chan *chan)
 {
 	struct virt_dma_chan *vc = to_virt_chan(chan);
 	struct milbeaut_hdmac_chan *mc = to_milbeaut_hdmac_chan(vc);
+	unsigned long flags;
 	u32 val;
 
-	spin_lock(&mc->vc.lock);
+	spin_lock_irqsave(&vc->lock, flags);
 	val = readl_relaxed(mc->reg_ch_base + MLB_HDMAC_DMACA);
 	val &= ~MLB_HDMAC_PB;
 	writel_relaxed(val, mc->reg_ch_base + MLB_HDMAC_DMACA);
-	spin_unlock(&mc->vc.lock);
+	spin_unlock_irqrestore(&vc->lock, flags);
 
 	return 0;
 }
--
2.17.1