Message-ID: <20220518090738.2694556-1-yangyingliang@huawei.com>
Date: Wed, 18 May 2022 17:07:38 +0800
From: Yang Yingliang <yangyingliang@...wei.com>
To: <linux-kernel@...r.kernel.org>,
<linux-mediatek@...ts.infradead.org>,
<linux-arm-kernel@...ts.infradead.org>, <netdev@...r.kernel.org>
CC: <haijun.liu@...iatek.com>, <chandrashekar.devegowda@...el.com>,
<ricardo.martinez@...ux.intel.com>, <loic.poulain@...aro.org>,
<davem@...emloft.net>, <kuba@...nel.org>
Subject: [PATCH -next] net: wwan: t7xx: use GFP_ATOMIC under spin lock in t7xx_cldma_alloc_and_map_skb()

t7xx_cldma_alloc_and_map_skb() is called under a spin lock in
t7xx_cldma_clear_rxq(), where allocating with GFP_KERNEL is not
allowed because it may sleep. Add an 'is_atomic' parameter to
t7xx_cldma_alloc_and_map_skb() so that callers in atomic context
can request the allocation with GFP_ATOMIC instead.

Fixes: 39d439047f1d ("net: wwan: t7xx: Add control DMA interface")
Reported-by: Hulk Robot <hulkci@...wei.com>
Signed-off-by: Yang Yingliang <yangyingliang@...wei.com>
---
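Note for reviewers (illustration only, not part of the patch): a
minimal sketch of the calling pattern being fixed. The helper name
demo_refill() is hypothetical. GFP_KERNEL allocations may sleep,
which is forbidden while a spin lock is held, so the allocation in
that context must use GFP_ATOMIC:

  #include <linux/skbuff.h>
  #include <linux/spinlock.h>

  static int demo_refill(struct sk_buff **slot, unsigned int size,
                         spinlock_t *lock)
  {
          unsigned long flags;

          spin_lock_irqsave(lock, flags);
          /* Sleeping is forbidden here, so GFP_ATOMIC is required. */
          *slot = __dev_alloc_skb(size, GFP_ATOMIC);
          spin_unlock_irqrestore(lock, flags);

          return *slot ? 0 : -ENOMEM;
  }

An alternative design would be to pass a gfp_t mask to
t7xx_cldma_alloc_and_map_skb() instead of a bool, letting each
caller choose its allocation flags directly.
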
drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
index 0c52801ed0de..1fa9bb763831 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -91,9 +91,12 @@ static void t7xx_cldma_gpd_set_next_ptr(struct cldma_gpd *gpd, dma_addr_t next_p
}
static int t7xx_cldma_alloc_and_map_skb(struct cldma_ctrl *md_ctrl, struct cldma_request *req,
- size_t size)
+ size_t size, bool is_atomic)
{
- req->skb = __dev_alloc_skb(size, GFP_KERNEL);
+ if (is_atomic)
+ req->skb = __dev_alloc_skb(size, GFP_ATOMIC);
+ else
+ req->skb = __dev_alloc_skb(size, GFP_KERNEL);
if (!req->skb)
return -ENOMEM;
@@ -174,7 +177,7 @@ static int t7xx_cldma_gpd_rx_from_q(struct cldma_queue *queue, int budget, bool
spin_unlock_irqrestore(&queue->ring_lock, flags);
req = queue->rx_refill;
- ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, queue->tr_ring->pkt_size);
+ ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, queue->tr_ring->pkt_size, false);
if (ret)
return ret;
@@ -402,7 +405,7 @@ static struct cldma_request *t7xx_alloc_rx_request(struct cldma_ctrl *md_ctrl, s
if (!req->gpd)
goto err_free_req;
- val = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, pkt_size);
+ val = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, pkt_size, false);
if (val)
goto err_free_pool;
@@ -801,7 +804,7 @@ static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
if (req->skb)
continue;
- ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, rxq->tr_ring->pkt_size);
+ ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, rxq->tr_ring->pkt_size, true);
if (ret)
break;
--
2.25.1