Date:   Tue, 17 May 2022 14:48:21 +0800
From:   Ziyang Xuan <william.xuanziyang@...wei.com>
To:     <chandrashekar.devegowda@...el.com>, <linuxwwan@...el.com>,
        <chiranjeevi.rapolu@...ux.intel.com>, <haijun.liu@...iatek.com>,
        <m.chetan.kumar@...ux.intel.com>,
        <ricardo.martinez@...ux.intel.com>, <loic.poulain@...aro.org>,
        <ryazanov.s.a@...il.com>, <johannes@...solutions.net>,
        <davem@...emloft.net>, <edumazet@...gle.com>, <kuba@...nel.org>,
        <pabeni@...hat.com>, <netdev@...r.kernel.org>
Subject: [PATCH net-next v2] net: wwan: t7xx: fix GFP_KERNEL usage in spin_lock context

t7xx_cldma_clear_rxq() calls t7xx_cldma_alloc_and_map_skb() in spin_lock
context, but __dev_alloc_skb() in t7xx_cldma_alloc_and_map_skb() uses
GFP_KERNEL, which may sleep and therefore must not be called while a
spinlock is held.
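
To make the hazard concrete, here is a minimal sketch of the broken
pattern (illustrative only, not code from this driver: struct rx_queue
and rx_refill_one() are invented, while __dev_alloc_skb(), the spinlock
API and the GFP flags are the real kernel interfaces):

	#include <linux/skbuff.h>
	#include <linux/spinlock.h>

	struct rx_queue {
		spinlock_t ring_lock;
	};

	static struct sk_buff *rx_refill_one(struct rx_queue *q,
					     unsigned int size)
	{
		struct sk_buff *skb;
		unsigned long flags;

		spin_lock_irqsave(&q->ring_lock, flags);
		/* BUG: GFP_KERNEL may sleep to reclaim memory, but
		 * sleeping is forbidden while a spinlock is held.
		 */
		skb = __dev_alloc_skb(size, GFP_KERNEL);
		spin_unlock_irqrestore(&q->ring_lock, flags);

		return skb;
	}

The generic alternatives are to allocate with GFP_ATOMIC (which cannot
sleep but can fail under memory pressure) or to move the allocation out
from under the lock; this patch takes the latter route, dropping the
lock entirely because the hardware is already stopped at that point.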

Because t7xx_cldma_clear_rxq() is only called after CLDMA has been
stopped, no concurrent access to the RX queue is possible, so the
spin_lock can simply be removed from t7xx_cldma_clear_rxq().

Fixes: 39d439047f1d ("net: wwan: t7xx: Add control DMA interface")
Signed-off-by: Ziyang Xuan <william.xuanziyang@...wei.com>
---
 drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
index 46066dcd2607..7493285a9606 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -782,10 +782,12 @@ static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
 	struct cldma_queue *rxq = &md_ctrl->rxq[qnum];
 	struct cldma_request *req;
 	struct cldma_gpd *gpd;
-	unsigned long flags;
 	int ret = 0;
 
-	spin_lock_irqsave(&rxq->ring_lock, flags);
+	/* CLDMA has been stopped; no CLDMA IRQ can fire anymore, so
+	 * holding ring_lock is not needed and functions that may sleep
+	 * can safely be called here.
+	 */
 	t7xx_cldma_q_reset(rxq);
 	list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) {
 		gpd = req->gpd;
@@ -808,7 +810,6 @@ static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
 
 		t7xx_cldma_gpd_set_data_ptr(req->gpd, req->mapped_buff);
 	}
-	spin_unlock_irqrestore(&rxq->ring_lock, flags);
 
 	return ret;
 }
-- 
2.25.1
