Message-ID: <CAJ8uoz0D3SfkJ8vW4d=uurLBBW33oye2+mzYJNXmQoPyM_jVfA@mail.gmail.com>
Date: Thu, 5 Sep 2024 14:49:18 +0200
From: Magnus Karlsson <magnus.karlsson@...il.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: bpf@...r.kernel.org, ast@...nel.org, daniel@...earbox.net,
andrii@...nel.org, netdev@...r.kernel.org, magnus.karlsson@...el.com,
bjorn@...nel.org
Subject: Re: [PATCH bpf-next] xsk: bump xsk_queue::queue_empty_descs in xp_can_alloc()
On Wed, 4 Sept 2024 at 18:46, Maciej Fijalkowski
<maciej.fijalkowski@...el.com> wrote:
>
> We have a STAT_FILL_EMPTY test case in xskxceiver that tries to process
> traffic with the fill queue being empty, which currently fails for the
> zero-copy ice driver after it started to use the xsk_buff_can_alloc()
> API. That is because xsk_queue::queue_empty_descs is currently only
> increased from the alloc APIs, and right now, if the driver sees that
> xsk_buff_pool will be unable to provide the requested count of buffers,
> it bails out early, skipping calls to xsk_buff_alloc{_batch}().
>
> The mentioned statistic should have been handled in xsk_buff_can_alloc()
> from the very beginning, so let's add this logic now. Do it by open
> coding xskq_cons_has_entries() and bumping queue_empty_descs in the
> middle when the fill queue currently has no entries.
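To make the bail-out concrete, a driver-side refill path of the kind the
first paragraph describes looks roughly like the sketch below. This is a
hypothetical helper for illustration only (my_zc_refill_rx is made up, not
the actual ice code); the point is that when xsk_buff_can_alloc() fails,
xsk_buff_alloc_batch() is never reached, so nothing bumped
queue_empty_descs before this patch.

#include <net/xdp_sock_drv.h>

/* Hypothetical zero-copy Rx refill helper, for illustration only. */
static u32 my_zc_refill_rx(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
                           u32 count)
{
        /*
         * Before this patch, bailing out here skipped the alloc APIs
         * entirely, so xsk_queue::queue_empty_descs was never bumped
         * even though the fill queue was empty.
         */
        if (!xsk_buff_can_alloc(pool, count))
                return 0;

        /* Only reached when enough buffers are available. */
        return xsk_buff_alloc_batch(pool, xdp, count);
}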
Thanks Maciej.
Acked-by: Magnus Karlsson <magnus.karlsson@...el.com>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> ---
> net/xdp/xsk_buff_pool.c | 11 ++++++++++-
> net/xdp/xsk_queue.h | 5 -----
> 2 files changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
> index c0e0204b9630..29afa880ffa0 100644
> --- a/net/xdp/xsk_buff_pool.c
> +++ b/net/xdp/xsk_buff_pool.c
> @@ -656,9 +656,18 @@ EXPORT_SYMBOL(xp_alloc_batch);
>
>  bool xp_can_alloc(struct xsk_buff_pool *pool, u32 count)
>  {
> +        u32 req_count, avail_count;
> +
>          if (pool->free_list_cnt >= count)
>                  return true;
> -        return xskq_cons_has_entries(pool->fq, count - pool->free_list_cnt);
> +        req_count = count - pool->free_list_cnt;
> +
> +        avail_count = xskq_cons_nb_entries(pool->fq, req_count);
> +
> +        if (!avail_count)
> +                pool->fq->queue_empty_descs++;
> +
> +        return avail_count >= req_count;
>  }
>  EXPORT_SYMBOL(xp_can_alloc);
>
> diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
> index 6f2d1621c992..406b20dfee8d 100644
> --- a/net/xdp/xsk_queue.h
> +++ b/net/xdp/xsk_queue.h
> @@ -306,11 +306,6 @@ static inline u32 xskq_cons_nb_entries(struct xsk_queue *q, u32 max)
>          return entries >= max ? max : entries;
>  }
>
> -static inline bool xskq_cons_has_entries(struct xsk_queue *q, u32 cnt)
> -{
> -        return xskq_cons_nb_entries(q, cnt) >= cnt;
> -}
> -
>  static inline bool xskq_cons_peek_addr_unchecked(struct xsk_queue *q, u64 *addr)
>  {
>          if (q->cached_prod == q->cached_cons)
> --
> 2.34.1
>
>