Message-ID: <cf656975-69b4-427e-8769-d16575774bba@redhat.com>
Date: Thu, 17 Oct 2024 10:56:35 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Aleksandr Mishin <amishin@...rgos.ru>,
Veerasenareddy Burru <vburru@...vell.com>,
Abhijit Ayarekar <aayarekar@...vell.com>,
Satananda Burla <sburla@...vell.com>, Sathesh Edara <sedara@...vell.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
lvc-project@...uxtesting.org, Simon Horman <horms@...nel.org>
Subject: Re: [PATCH net v4 1/2] octeon_ep: Implement helper for iterating
packets in Rx queue
On 10/12/24 11:49, Aleksandr Mishin wrote:
> diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
> index 4746a6b258f0..62db101b2147 100644
> --- a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
> +++ b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
> @@ -336,6 +336,30 @@ static int octep_oq_check_hw_for_pkts(struct octep_device *oct,
>  	return new_pkts;
>  }
>
> +/**
> + * octep_oq_next_pkt() - Move to the next packet in Rx queue.
> + *
> + * @oq: Octeon Rx queue data structure.
> + * @buff_info: Current packet buffer info.
> + * @read_idx: Current packet index in the ring.
> + * @desc_used: Current packet descriptor number.
> + *
> + * Free the resources associated with a packet.
> + * Increment packet index in the ring and packet descriptor number.
> + */
> +static void octep_oq_next_pkt(struct octep_oq *oq,
> +			      struct octep_rx_buffer *buff_info,
> +			      u32 *read_idx, u32 *desc_used)
> +{
> +	dma_unmap_page(oq->dev, oq->desc_ring[*read_idx].buffer_ptr,
> +		       PAGE_SIZE, DMA_FROM_DEVICE);
> +	buff_info->page = NULL;
> +	(*read_idx)++;
> +	(*desc_used)++;
> +	if (*read_idx == oq->max_count)
> +		*read_idx = 0;
> +}
> +
>  /**
>   * __octep_oq_process_rx() - Process hardware Rx queue and push to stack.
>   *
> @@ -367,10 +391,7 @@ static int __octep_oq_process_rx(struct octep_device *oct,
>  	desc_used = 0;
>  	for (pkt = 0; pkt < pkts_to_process; pkt++) {
>  		buff_info = (struct octep_rx_buffer *)&oq->buff_info[read_idx];
> -		dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr,
> -			       PAGE_SIZE, DMA_FROM_DEVICE);
>  		resp_hw = page_address(buff_info->page);
> -		buff_info->page = NULL;
>
>  		/* Swap the length field that is in Big-Endian to CPU */
>  		buff_info->len = be64_to_cpu(resp_hw->length);
> @@ -394,36 +415,27 @@ static int __octep_oq_process_rx(struct octep_device *oct,
>  			data_offset = OCTEP_OQ_RESP_HW_SIZE;
>  			rx_ol_flags = 0;
>  		}
> +
> +		skb = build_skb((void *)resp_hw, PAGE_SIZE);
> +		skb_reserve(skb, data_offset);
> +
> +		octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used);
I'm sorry for not catching the following in the previous iteration (the
split indeed helped with the review):
build_skb() will write into the paged buffer, so I think you should
unmap it with octep_oq_next_pkt() before the skb creation.
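
Something along these lines should do it (just a sketch to illustrate
the intended ordering, untested):

	/* Unmap the buffer before the CPU writes into it: build_skb()
	 * initializes the shared info area at the end of the page.
	 */
	octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used);

	skb = build_skb((void *)resp_hw, PAGE_SIZE);
	skb_reserve(skb, data_offset);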
That in turn will have a side effect on the following patch (the 'do {}
while' loop should become a plain 'while' one).
Thanks,
Paolo