Message-Id: <20220727161024.676498293@linuxfoundation.org>
Date: Wed, 27 Jul 2022 18:12:23 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	stable@...r.kernel.org,
	Przemyslaw Patynowski <przemyslawx.patynowski@...el.com>,
	Jesse Brandeburg <jesse.brandeburg@...el.com>,
	Konrad Jankowski <konrad0.jankowski@...el.com>,
	Tony Nguyen <anthony.l.nguyen@...el.com>,
	Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.18 079/158] iavf: Fix handling of dummy receive descriptors

From: Przemyslaw Patynowski <przemyslawx.patynowski@...el.com>

[ Upstream commit a9f49e0060301a9bfebeca76739158d0cf91cdf6 ]

Fix a memory leak caused by not handling dummy receive descriptors
properly. iavf_get_rx_buffer() now returns the rx_buffer pointer for dummy
receive descriptors as well. Without this patch, when the hardware writes a
dummy descriptor, iavf would not free the page allocated for the previous
receive buffer. This is an unlikely event, but it can still happen.

[Jesse: massaged commit message]

Fixes: efa14c398582 ("iavf: allow null RX descriptors")
Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@...el.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@...el.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@...el.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@...el.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
drivers/net/ethernet/intel/iavf/iavf_txrx.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
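
Note, not part of the patch: below is a minimal standalone sketch of the
leak the commit message describes, with malloc()/free() standing in for the
page/DMA handling. The ring layout and helper names (get_rx_buffer_old/new,
put_rx_buffer) are simplified assumptions for illustration only; just the
!size early return mirrors the code touched by this patch. With the old
behaviour, the caller-side cleanup is skipped for a dummy descriptor and
the slot's page is never released; with the fixed behaviour, the slot is
handed back and released as usual. Compiled with gcc, it prints
"page leaked" for the old path and "page freed" for the fixed path.

/*
 * Minimal userspace model of the leak, not the driver's actual code.
 * malloc/free stand in for the DMA page handling; the ring layout and
 * helper names are simplified assumptions for illustration.
 */
#include <stdio.h>
#include <stdlib.h>

struct rx_buffer {
	void *page;			/* stands in for the mapped DMA page */
};

struct rx_ring {
	struct rx_buffer rx_bi[4];	/* receive buffer info array */
	unsigned int next_to_clean;
};

/* Old behaviour: a dummy (zero-size) descriptor yields a NULL buffer. */
static struct rx_buffer *get_rx_buffer_old(struct rx_ring *ring,
					    unsigned int size)
{
	if (!size)
		return NULL;
	return &ring->rx_bi[ring->next_to_clean];
}

/* Fixed behaviour: the slot is always handed back so it can be recycled. */
static struct rx_buffer *get_rx_buffer_new(struct rx_ring *ring,
					    unsigned int size)
{
	struct rx_buffer *rx_buffer = &ring->rx_bi[ring->next_to_clean];

	if (!size)
		return rx_buffer;
	/* the real driver syncs the buffer for CPU use here */
	return rx_buffer;
}

/* Caller-side cleanup: a NULL buffer is skipped, which is the leak. */
static void put_rx_buffer(struct rx_buffer *rx_buffer)
{
	if (!rx_buffer)
		return;
	free(rx_buffer->page);		/* stands in for releasing the page */
	rx_buffer->page = NULL;
}

int main(void)
{
	struct rx_ring ring = { .next_to_clean = 0 };
	struct rx_buffer *buf;

	ring.rx_bi[0].page = malloc(4096);

	/* hardware wrote a dummy descriptor: size == 0 */
	buf = get_rx_buffer_old(&ring, 0);
	put_rx_buffer(buf);	/* buf is NULL, rx_bi[0].page is never freed */
	printf("old: %s\n", ring.rx_bi[0].page ? "page leaked" : "page freed");

	buf = get_rx_buffer_new(&ring, 0);
	put_rx_buffer(buf);	/* slot is returned, page gets released */
	printf("new: %s\n", ring.rx_bi[0].page ? "page leaked" : "page freed");

	return 0;
}
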
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 7bf8c25dc824..06d18797d25a 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -1285,11 +1285,10 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring,
 {
 	struct iavf_rx_buffer *rx_buffer;
 
-	if (!size)
-		return NULL;
-
 	rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean];
 	prefetchw(rx_buffer->page);
+	if (!size)
+		return rx_buffer;
 
 	/* we are reusing so sync this buffer for CPU use */
 	dma_sync_single_range_for_cpu(rx_ring->dev,
--
2.35.1