Message-ID: <CAMZdPi9uvh4E70-AXpGrdzkgh35mfWQbhL8Kxw_o9_DsfL2gbw@mail.gmail.com>
Date: Wed, 18 May 2022 11:13:56 +0200
From: Loic Poulain <loic.poulain@...aro.org>
To: Yang Yingliang <yangyingliang@...wei.com>
Cc: linux-kernel@...r.kernel.org, linux-mediatek@...ts.infradead.org,
linux-arm-kernel@...ts.infradead.org, netdev@...r.kernel.org,
haijun.liu@...iatek.com, chandrashekar.devegowda@...el.com,
ricardo.martinez@...ux.intel.com, davem@...emloft.net,
kuba@...nel.org
Subject: Re: [PATCH -next] net: wwan: t7xx: use GFP_ATOMIC under spin lock in t7xx_cldma_gpd_set_next_ptr()
Hi Yang,
On Wed, 18 May 2022 at 10:57, Yang Yingliang <yangyingliang@...wei.com> wrote:
>
> Sometimes t7xx_cldma_alloc_and_map_skb() is called under a spin lock,
> so add a parameter to t7xx_cldma_alloc_and_map_skb() that lets callers
> request the GFP_ATOMIC flag in that context.
>
> Fixes: 39d439047f1d ("net: wwan: t7xx: Add control DMA interface")
> Reported-by: Hulk Robot <hulkci@...wei.com>
> Signed-off-by: Yang Yingliang <yangyingliang@...wei.com>
> ---
> drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 13 ++++++++-----
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
> index 0c52801ed0de..1fa9bb763831 100644
> --- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
> +++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
> @@ -91,9 +91,12 @@ static void t7xx_cldma_gpd_set_next_ptr(struct cldma_gpd *gpd, dma_addr_t next_p
> }
>
> static int t7xx_cldma_alloc_and_map_skb(struct cldma_ctrl *md_ctrl, struct cldma_request *req,
> - size_t size)
> + size_t size, bool is_atomic)
It would be simpler to pass the gfp_mask directly as a parameter.
> {
> - req->skb = __dev_alloc_skb(size, GFP_KERNEL);
> + if (is_atomic)
> + req->skb = __dev_alloc_skb(size, GFP_ATOMIC);
> + else
> + req->skb = __dev_alloc_skb(size, GFP_KERNEL);
> if (!req->skb)
> return -ENOMEM;
>
> @@ -174,7 +177,7 @@ static int t7xx_cldma_gpd_rx_from_q(struct cldma_queue *queue, int budget, bool
> spin_unlock_irqrestore(&queue->ring_lock, flags);
> req = queue->rx_refill;
>
> - ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, queue->tr_ring->pkt_size);
> + ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, queue->tr_ring->pkt_size, false);
> if (ret)
> return ret;
>
> @@ -402,7 +405,7 @@ static struct cldma_request *t7xx_alloc_rx_request(struct cldma_ctrl *md_ctrl, s
> if (!req->gpd)
> goto err_free_req;
>
> - val = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, pkt_size);
> + val = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, pkt_size, false);
> if (val)
> goto err_free_pool;
>
> @@ -801,7 +804,7 @@ static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
> if (req->skb)
> continue;
>
> - ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, rxq->tr_ring->pkt_size);
> + ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, rxq->tr_ring->pkt_size, true);
> if (ret)
> break;
>
> --
> 2.25.1
>