Message-ID: <20241017100651.15863-2-amishin@t-argos.ru>
Date: Thu, 17 Oct 2024 13:06:50 +0300
From: Aleksandr Mishin <amishin@...rgos.ru>
To: Veerasenareddy Burru <vburru@...vell.com>, Abhijit Ayarekar
<aayarekar@...vell.com>, Satananda Burla <sburla@...vell.com>, Sathesh Edara
<sedara@...vell.com>
CC: Aleksandr Mishin <amishin@...rgos.ru>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski
<kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, <netdev@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <lvc-project@...uxtesting.org>, Simon Horman
<horms@...nel.org>
Subject: [PATCH net v5 1/2] octeon_ep: Implement helper for iterating packets in Rx queue

The common code that releases the packet buffer and advances the ring index
is extracted and moved to a newly implemented helper to make the code more
readable and to avoid duplication. This is a preparation for skb allocation
failure handling.

Found by Linux Verification Center (linuxtesting.org) with SVACE.

Suggested-by: Simon Horman <horms@...nel.org>
Suggested-by: Paolo Abeni <pabeni@...hat.com>
Signed-off-by: Aleksandr Mishin <amishin@...rgos.ru>
---
Compile tested only.

v5:
- Unmap paged buffer before the skb creation as suggested by Paolo
(https://lore.kernel.org/all/cf656975-69b4-427e-8769-d16575774bba@redhat.com/)
v4: https://lore.kernel.org/all/20241012094950.9438-1-amishin@t-argos.ru/
- Split patch up as suggested by Jakub
(https://lore.kernel.org/all/20241004073311.223efca4@kernel.org/)
v3: https://lore.kernel.org/all/20240930053328.9618-1-amishin@t-argos.ru/
- Implement a helper which frees the current packet resources and increases
  the ring index and descriptor count, as suggested by Simon
(https://lore.kernel.org/all/20240919134812.GB1571683@kernel.org/)
- v3 has been reviewed-by Simon Horman
(https://lore.kernel.org/all/20240930162622.GF1310185@kernel.org/)
v1: https://lore.kernel.org/all/20240906063907.9591-1-amishin@t-argos.ru/
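
For context, a usage sketch (not part of this patch): the helper is meant to
let __octep_oq_process_rx() bail out cleanly if build_skb() fails, since the
buffer is already unmapped and the ring indices already advanced by the time
the skb is built. The error label below is a hypothetical placeholder; the
actual failure handling is added in patch 2/2.

	octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used);

	skb = build_skb((void *)resp_hw, PAGE_SIZE);
	if (unlikely(!skb)) {
		/* The page is already unmapped and the descriptor accounted
		 * for by octep_oq_next_pkt(), so the packet can simply be
		 * dropped here (illustrative label only, e.g. skip any
		 * remaining fragments and continue with the next packet).
		 */
		goto drop_pkt;
	}
	skb_reserve(skb, data_offset);
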
.../net/ethernet/marvell/octeon_ep/octep_rx.c | 55 +++++++++++--------
1 file changed, 32 insertions(+), 23 deletions(-)
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
index 4746a6b258f0..a889c1510518 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
@@ -336,6 +336,30 @@ static int octep_oq_check_hw_for_pkts(struct octep_device *oct,
return new_pkts;
}
+/**
+ * octep_oq_next_pkt() - Move to the next packet in Rx queue.
+ *
+ * @oq: Octeon Rx queue data structure.
+ * @buff_info: Current packet buffer info.
+ * @read_idx: Current packet index in the ring.
+ * @desc_used: Current packet descriptor number.
+ *
+ * Free the resources associated with a packet.
+ * Increment packet index in the ring and packet descriptor number.
+ */
+static void octep_oq_next_pkt(struct octep_oq *oq,
+ struct octep_rx_buffer *buff_info,
+ u32 *read_idx, u32 *desc_used)
+{
+ dma_unmap_page(oq->dev, oq->desc_ring[*read_idx].buffer_ptr,
+ PAGE_SIZE, DMA_FROM_DEVICE);
+ buff_info->page = NULL;
+ (*read_idx)++;
+ (*desc_used)++;
+ if (*read_idx == oq->max_count)
+ *read_idx = 0;
+}
+
/**
* __octep_oq_process_rx() - Process hardware Rx queue and push to stack.
*
@@ -367,10 +391,7 @@ static int __octep_oq_process_rx(struct octep_device *oct,
desc_used = 0;
for (pkt = 0; pkt < pkts_to_process; pkt++) {
buff_info = (struct octep_rx_buffer *)&oq->buff_info[read_idx];
- dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr,
- PAGE_SIZE, DMA_FROM_DEVICE);
resp_hw = page_address(buff_info->page);
- buff_info->page = NULL;
/* Swap the length field that is in Big-Endian to CPU */
buff_info->len = be64_to_cpu(resp_hw->length);
@@ -394,36 +415,27 @@ static int __octep_oq_process_rx(struct octep_device *oct,
data_offset = OCTEP_OQ_RESP_HW_SIZE;
rx_ol_flags = 0;
}
+
+ octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used);
+
+ skb = build_skb((void *)resp_hw, PAGE_SIZE);
+ skb_reserve(skb, data_offset);
+
rx_bytes += buff_info->len;
if (buff_info->len <= oq->max_single_buffer_size) {
- skb = build_skb((void *)resp_hw, PAGE_SIZE);
- skb_reserve(skb, data_offset);
skb_put(skb, buff_info->len);
- read_idx++;
- desc_used++;
- if (read_idx == oq->max_count)
- read_idx = 0;
} else {
struct skb_shared_info *shinfo;
u16 data_len;
- skb = build_skb((void *)resp_hw, PAGE_SIZE);
- skb_reserve(skb, data_offset);
/* Head fragment includes response header(s);
* subsequent fragments contains only data.
*/
skb_put(skb, oq->max_single_buffer_size);
- read_idx++;
- desc_used++;
- if (read_idx == oq->max_count)
- read_idx = 0;
-
shinfo = skb_shinfo(skb);
data_len = buff_info->len - oq->max_single_buffer_size;
while (data_len) {
- dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr,
- PAGE_SIZE, DMA_FROM_DEVICE);
buff_info = (struct octep_rx_buffer *)
&oq->buff_info[read_idx];
if (data_len < oq->buffer_size) {
@@ -438,11 +450,8 @@ static int __octep_oq_process_rx(struct octep_device *oct,
buff_info->page, 0,
buff_info->len,
buff_info->len);
- buff_info->page = NULL;
- read_idx++;
- desc_used++;
- if (read_idx == oq->max_count)
- read_idx = 0;
+
+ octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used);
}
}
--
2.30.2