Message-Id: <20220727161031.924437707@linuxfoundation.org>
Date: Wed, 27 Jul 2022 18:10:05 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Przemyslaw Patynowski <przemyslawx.patynowski@...el.com>,
Jesse Brandeburg <jesse.brandeburg@...el.com>,
Konrad Jankowski <konrad0.jankowski@...el.com>,
Tony Nguyen <anthony.l.nguyen@...el.com>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.15 101/201] iavf: Fix handling of dummy receive descriptors

From: Przemyslaw Patynowski <przemyslawx.patynowski@...el.com>

[ Upstream commit a9f49e0060301a9bfebeca76739158d0cf91cdf6 ]

Fix a memory leak caused by not handling dummy receive descriptors
properly. iavf_get_rx_buffer now returns the rx_buffer pointer for dummy
receive descriptors as well. Without this patch, when the hardware writes
a dummy descriptor, iavf would not free the page allocated for the
previous receive buffer. This is an unlikely event but can still happen.
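
To make the failure mode concrete, below is a minimal user-space model of
the leak. It is only a sketch, not the driver's code: the names
model_get_rx_buffer()/model_put_rx_buffer(), the 4-entry ring, and the
descriptor sizes are all hypothetical, and the real clean loop additionally
handles DMA sync, skb construction, and page-reuse accounting. It
illustrates the point above: if the get helper returns NULL for a zero-size
(dummy) descriptor, the put/release step for that ring slot never runs and
the slot's page is never freed.

/*
 * Minimal user-space model of the leak; hypothetical names, not the
 * driver's actual code.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define RING_SIZE 4

struct model_rx_buffer {
	void *page;		/* stands in for the mapped Rx page */
};

static struct model_rx_buffer ring[RING_SIZE];
static unsigned int next_to_clean;

/* Old behaviour: a dummy (zero-size) descriptor yields NULL.
 * Fixed behaviour: always hand back the current ring slot.
 */
static struct model_rx_buffer *model_get_rx_buffer(unsigned int size,
						   bool fixed)
{
	if (!size && !fixed)
		return NULL;
	return &ring[next_to_clean];
}

/* Release the slot's page; this models the put/recycle step that the
 * real clean loop only reaches with a non-NULL buffer pointer.
 */
static void model_put_rx_buffer(struct model_rx_buffer *buf)
{
	if (!buf)
		return;		/* dummy descriptor: slot's page is skipped */
	free(buf->page);
	buf->page = NULL;
}

static void run(bool fixed)
{
	/* descriptor sizes as written by "hardware"; 0 is a dummy */
	static const unsigned int sizes[RING_SIZE] = { 1500, 0, 1500, 64 };
	int i;

	next_to_clean = 0;
	for (i = 0; i < RING_SIZE; i++)
		ring[i].page = malloc(4096);

	for (i = 0; i < RING_SIZE; i++) {
		model_put_rx_buffer(model_get_rx_buffer(sizes[i], fixed));
		next_to_clean = (next_to_clean + 1) % RING_SIZE;
	}

	for (i = 0; i < RING_SIZE; i++) {
		printf("%s: slot %d %s\n", fixed ? "fixed" : "old",
		       i, ring[i].page ? "still allocated (leaked)" : "released");
		free(ring[i].page);	/* clean up the model itself */
	}
}

int main(void)
{
	run(false);	/* pre-fix behaviour: the dummy descriptor's slot leaks */
	run(true);	/* patched behaviour: every slot is released */
	return 0;
}

Compiled and run as-is, the "old" pass reports the dummy descriptor's slot
as still allocated while the "fixed" pass releases every slot, mirroring
the behaviour change described above.
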
[Jesse: massaged commit message]
Fixes: efa14c398582 ("iavf: allow null RX descriptors")
Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@...el.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@...el.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@...el.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@...el.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
 drivers/net/ethernet/intel/iavf/iavf_txrx.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 3525eab8e9f9..5448ed0e0357 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -1250,11 +1250,10 @@ static struct iavf_rx_buffer *iavf_get_rx_buffer(struct iavf_ring *rx_ring,
 {
 	struct iavf_rx_buffer *rx_buffer;
 
-	if (!size)
-		return NULL;
-
 	rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean];
 	prefetchw(rx_buffer->page);
+	if (!size)
+		return rx_buffer;
 
 	/* we are reusing so sync this buffer for CPU use */
 	dma_sync_single_range_for_cpu(rx_ring->dev,
--
2.35.1