Message-Id: <36ec3b729542ea60898471d890796f745479ba32.1673342990.git.lorenzo@kernel.org>
Date: Tue, 10 Jan 2023 10:31:26 +0100
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: netdev@...r.kernel.org
Cc: davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
	pabeni@...hat.com, lorenzo.bianconi@...hat.com, nbd@....name,
	john@...ozen.org, sean.wang@...iatek.com, Mark-MC.Lee@...iatek.com,
	sujuan.chen@...iatek.com, daniel@...rotopia.org, alexanderduyck@...com
Subject: [PATCH v2 net-next] net: ethernet: mtk_wed: get rid of queue lock for rx queue

The queue spinlock is currently held in the mtk_wed_wo_queue_rx_clean()
and mtk_wed_wo_queue_refill() routines for the MTK Wireless Ethernet
Dispatcher MCU rx queue. mtk_wed_wo_queue_refill() runs during
initialization and in the rx tasklet, while mtk_wed_wo_queue_rx_clean()
runs in mtk_wed_wo_hw_deinit() during the hw de-init phase, after the
rx tasklet has been disabled. Since mtk_wed_wo_queue_rx_clean() and
mtk_wed_wo_queue_refill() can't run concurrently, get rid of the
spinlock for the MCU rx queue.

Reviewed-by: Alexander Duyck <alexanderduyck@...com>
Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
---
Changes since v1:
- improve commit message
---
 drivers/net/ethernet/mediatek/mtk_wed_wo.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
index a0a39643caf7..d32b86499896 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
@@ -138,7 +138,6 @@ mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q,
 	enum dma_data_direction dir = rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
 	int n_buf = 0;
 
-	spin_lock_bh(&q->lock);
 	while (q->queued < q->n_desc) {
 		struct mtk_wed_wo_queue_entry *entry;
 		dma_addr_t addr;
@@ -172,7 +171,6 @@ mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q,
 		q->queued++;
 		n_buf++;
 	}
-	spin_unlock_bh(&q->lock);
 
 	return n_buf;
 }
@@ -316,7 +314,6 @@ mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 {
 	struct page *page;
 
-	spin_lock_bh(&q->lock);
 	for (;;) {
 		void *buf = mtk_wed_wo_dequeue(wo, q, NULL, true);
 
@@ -325,7 +322,6 @@ mtk_wed_wo_queue_rx_clean(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q)
 
 		skb_free_frag(buf);
 	}
-	spin_unlock_bh(&q->lock);
 
 	if (!q->cache.va)
 		return;
-- 
2.39.0
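
The serialization argument in the commit message relies on the tasklet
lifecycle rather than on a lock: once the rx tasklet has been disabled in
the de-init path, the refill routine can no longer run, so the cleanup
routine owns the queue exclusively. Below is a minimal kernel-style sketch
of that pattern; the my_wo type and the my_wo_* function names are
hypothetical illustrations of the idea, not the actual driver code.

#include <linux/interrupt.h>

/* Hypothetical sketch: serializing refill vs. rx_clean through the
 * tasklet lifecycle instead of a spinlock.
 */
struct my_wo {
	struct tasklet_struct rx_tasklet;
	int queued;	/* queue state touched by both paths, never concurrently */
};

static void my_wo_queue_refill(struct my_wo *wo)
{
	/* Called from init and from rx tasklet context only, so no other
	 * path touches wo->queued while the tasklet is live and no
	 * spinlock is needed around this update.
	 */
	wo->queued++;
}

static void my_wo_rx_tasklet_fn(struct tasklet_struct *t)
{
	struct my_wo *wo = from_tasklet(wo, t, rx_tasklet);

	my_wo_queue_refill(wo);
}

static void my_wo_queue_rx_clean(struct my_wo *wo)
{
	/* Only reached after the tasklet is dead (see deinit below), so
	 * the refill path is guaranteed not to be running.
	 */
	wo->queued = 0;
}

static void my_wo_hw_deinit(struct my_wo *wo)
{
	/* tasklet_disable() waits for a running tasklet to finish and keeps
	 * it from being scheduled again; after it returns, rx_clean cannot
	 * race with refill.
	 */
	tasklet_disable(&wo->rx_tasklet);
	my_wo_queue_rx_clean(wo);
}

In the real driver the same guarantee comes from the hw de-init path
disabling the rx tasklet before mtk_wed_wo_queue_rx_clean() runs, which
is exactly what the commit message cites to justify dropping q->lock.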