Message-ID: <20230417025203.18097-3-chunfeng.yun@mediatek.com>
Date: Mon, 17 Apr 2023 10:51:59 +0800
From: Chunfeng Yun <chunfeng.yun@...iatek.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Rob Herring <robh+dt@...nel.org>
CC: Chunfeng Yun <chunfeng.yun@...iatek.com>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
Matthias Brugger <matthias.bgg@...il.com>,
AngeloGioacchino Del Regno
<angelogioacchino.delregno@...labora.com>,
<linux-usb@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-mediatek@...ts.infradead.org>, <devicetree@...r.kernel.org>,
<linux-kernel@...r.kernel.org>,
Tianping Fang <tianping.fang@...iatek.com>
Subject: [PATCH v3 3/7] usb: mtu3: fix kernel panic at qmu transfer done irq handler

When handling the QMU transfer done irq, the handler unlocks @mtu->lock
before giving back the request. If another thread handles a disconnect
event at the same time and tries to disable the ep, it may take
@mtu->lock and free the QMU ring, and the irq handler may then get a
NULL gpd. Avoid the kernel panic by checking the gpd's value before
handling it.
e.g.
qmu done irq on cpu0              thread running on cpu1

qmu_done_tx()
  handle gpd [0]
    mtu3_req_complete()           mtu3_gadget_ep_disable()
      unlock @mtu->lock
        give back request           lock @mtu->lock
                                      mtu3_ep_disable()
                                        mtu3_gpd_ring_free()
                                    unlock @mtu->lock
      lock @mtu->lock
  get next gpd [1]

[1]: goto [0] to handle next gpd, and next gpd may be NULL.
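
For illustration only (not part of this patch): a minimal, self-contained
userspace C sketch of the guarded dequeue loop. The names gpd_ring,
advance_deq and done_handler are made up for the example and only mirror
the shape of the mtu3 structures; the point is that the dequeue pointer
may become NULL once the ring is torn down by the disconnect path, so the
done loop must test it before dereferencing, just like the added
"gpd &&" check below.

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

struct gpd {
	bool hwo;			/* stands in for GET_GPD_HWO() */
};

struct gpd_ring {
	struct gpd *start;
	struct gpd *end;
	struct gpd *dequeue;		/* may become NULL once the ring is freed */
};

/* guard for the sketch: treat a NULL dequeue pointer as a freed ring */
static struct gpd *advance_deq(struct gpd_ring *ring)
{
	if (!ring->dequeue)
		return NULL;

	if (ring->dequeue < ring->end)
		ring->dequeue++;
	else
		ring->dequeue = ring->start;

	return ring->dequeue;
}

/* shaped like qmu_done_tx(): stop as soon as the ring disappears */
static void done_handler(struct gpd_ring *ring, struct gpd *gpd_current)
{
	struct gpd *gpd = ring->dequeue;

	while (gpd && gpd != gpd_current && !gpd->hwo) {
		printf("complete gpd %p\n", (void *)gpd);
		/*
		 * The real driver drops the lock here to give back the
		 * request; a disconnect on another CPU may free the ring
		 * meanwhile, so the next gpd may be NULL.
		 */
		gpd = advance_deq(ring);
	}
}

int main(void)
{
	struct gpd gpds[4] = { 0 };
	struct gpd_ring ring = {
		.start = gpds, .end = &gpds[3], .dequeue = &gpds[0],
	};

	/* normal case: complete gpds until the current one is reached */
	done_handler(&ring, &gpds[2]);

	/* freed-ring case: must not dereference a NULL gpd */
	ring.dequeue = NULL;
	done_handler(&ring, &gpds[2]);
	return 0;
}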
Fixes: 48e0d3735aa5 ("usb: mtu3: supports new QMU format")
Signed-off-by: Chunfeng Yun <chunfeng.yun@...iatek.com>
---
v3: add Fixes tag suggested by AngeloGioacchino
v2: fix typo suggested by AngeloGioacchino
---
drivers/usb/mtu3/mtu3_qmu.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/usb/mtu3/mtu3_qmu.c b/drivers/usb/mtu3/mtu3_qmu.c
index 6be4977a5db5..3d77408e3133 100644
--- a/drivers/usb/mtu3/mtu3_qmu.c
+++ b/drivers/usb/mtu3/mtu3_qmu.c
@@ -210,6 +210,7 @@ static struct qmu_gpd *advance_enq_gpd(struct mtu3_gpd_ring *ring)
 	return ring->enqueue;
 }
 
+/* @dequeue may be NULL if ring is unallocated or freed */
 static struct qmu_gpd *advance_deq_gpd(struct mtu3_gpd_ring *ring)
 {
 	if (ring->dequeue < ring->end)
@@ -522,7 +523,7 @@ static void qmu_done_tx(struct mtu3 *mtu, u8 epnum)
 	dev_dbg(mtu->dev, "%s EP%d, last=%p, current=%p, enq=%p\n",
 		__func__, epnum, gpd, gpd_current, ring->enqueue);
 
-	while (gpd != gpd_current && !GET_GPD_HWO(gpd)) {
+	while (gpd && gpd != gpd_current && !GET_GPD_HWO(gpd)) {
 		mreq = next_request(mep);
 
 		if (mreq == NULL || mreq->gpd != gpd) {
@@ -561,7 +562,7 @@ static void qmu_done_rx(struct mtu3 *mtu, u8 epnum)
 	dev_dbg(mtu->dev, "%s EP%d, last=%p, current=%p, enq=%p\n",
 		__func__, epnum, gpd, gpd_current, ring->enqueue);
 
-	while (gpd != gpd_current && !GET_GPD_HWO(gpd)) {
+	while (gpd && gpd != gpd_current && !GET_GPD_HWO(gpd)) {
 		mreq = next_request(mep);
 
 		if (mreq == NULL || mreq->gpd != gpd) {
--
2.18.0