Message-ID: <08bd63d5de4ea8814ddd58c51ca6d1c17d0990e6.camel@gmail.com>
Date: Tue, 13 Dec 2022 08:09:15 -0800
From: Alexander H Duyck <alexander.duyck@...il.com>
To: Tirthendu Sarkar <tirthendu.sarkar@...el.com>, tirtha@...il.com,
jesse.brandeburg@...el.com, anthony.l.nguyen@...el.com,
davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, ast@...nel.org, daniel@...earbox.net,
hawk@...nel.org, john.fastabend@...il.com,
intel-wired-lan@...ts.osuosl.org
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
bpf@...r.kernel.org, magnus.karlsson@...el.com,
maciej.fijalkowski@...el.com
Subject: Re: [PATCH intel-next 4/5] i40e: pull out rx buffer allocation to
 end of i40e_clean_rx_irq()

On Tue, 2022-12-13 at 16:20 +0530, Tirthendu Sarkar wrote:
> Previously i40e_alloc_rx_buffers() was called for every 32 cleaned
> buffers. For multi-buffers this may not be optimal as there may be more
> cleaned buffers in each i40e_clean_rx_irq() call. So this is now pulled
> out of the loop and moved to the end of i40e_clean_rx_irq().
>
> As a consequence instead of counting the number of buffers to be cleaned,
> I40E_DESC_UNUSED() can be used to call i40e_alloc_rx_buffers().
>
> Signed-off-by: Tirthendu Sarkar <tirthendu.sarkar@...el.com>
I suspect this will lead to performance issues on systems configured
with smaller ring sizes. Specifically, with this change you are
limiting things to allocating only once every 64
(NAPI_POLL_WEIGHT/budget) packets.
> ---
> drivers/net/ethernet/intel/i40e/i40e_txrx.c | 12 ++----------
> 1 file changed, 2 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> index e01bcc91a196..dc9dc0acdd37 100644
> --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> @@ -2425,7 +2425,6 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget,
> unsigned int *rx_cleaned)
> {
> unsigned int total_rx_bytes = 0, total_rx_packets = 0, frame_sz = 0;
> - u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
> unsigned int offset = rx_ring->rx_offset;
> struct sk_buff *skb = rx_ring->skb;
> u16 ntp = rx_ring->next_to_process;
> @@ -2450,13 +2449,6 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget,
> unsigned int size;
> u64 qword;
>
> - /* return some buffers to hardware, one at a time is too slow */
> - if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
> - failure = failure ||
> - i40e_alloc_rx_buffers(rx_ring, cleaned_count);
> - cleaned_count = 0;
> - }
> -
> rx_desc = I40E_RX_DESC(rx_ring, ntp);
>
> /* status_error_len will always be zero for unused descriptors
> @@ -2479,7 +2471,6 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget,
> rx_buffer = i40e_rx_bi(rx_ring, ntp);
> I40E_INC_NEXT(ntp, ntc, rmax);
> i40e_reuse_rx_page(rx_ring, rx_buffer);
> - cleaned_count++;
> continue;
> }
>
> @@ -2531,7 +2522,6 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget,
> }
>
> i40e_put_rx_buffer(rx_ring, rx_buffer);
> - cleaned_count++;
>
> I40E_INC_NEXT(ntp, ntc, rmax);
> if (i40e_is_non_eop(rx_ring, rx_desc))
> @@ -2558,6 +2548,8 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget,
> rx_ring->next_to_process = ntp;
> rx_ring->next_to_clean = ntc;
>
> + failure = i40e_alloc_rx_buffers(rx_ring, I40E_DESC_UNUSED(rx_ring));
> +
> i40e_finalize_xdp_rx(rx_ring, xdp_xmit);
> rx_ring->skb = skb;
I am not a fan of this "failure" approach either. I hadn't noticed it
before, but it is problematic. It would make much more sense to take an
approach similar to what we did for Tx, where we kick the ring
periodically if it looks like it is stuck, in this case empty.

The problem is that if you have memory allocation issues, the last
thing you need is a NIC deciding to become memory hungry itself and
getting stuck in an allocation loop.