Message-ID: <CAJ8uoz2yjB7nj495x3CuiwHfuU+T0g3MXy4DScG2iT6gtkQsqg@mail.gmail.com>
Date: Thu, 12 Sep 2024 13:04:17 +0200
From: Magnus Karlsson <magnus.karlsson@...il.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: bpf@...r.kernel.org, ast@...nel.org, daniel@...earbox.net, 
	andrii@...nel.org, netdev@...r.kernel.org, magnus.karlsson@...el.com, 
	bjorn@...nel.org, Dries De Winter <ddewinter@...amedia.com>
Subject: Re: [PATCH bpf] xsk: fix batch alloc API on non-coherent systems

On Wed, 11 Sept 2024 at 21:10, Maciej Fijalkowski
<maciej.fijalkowski@...el.com> wrote:
>
> In cases where synchronizing DMA operations is necessary,
> xsk_buff_alloc_batch() returns a single buffer instead of the requested
> count. This puts pressure on drivers that use the batch API, as they
> have to check for this corner case on their side and take care of the
> remaining allocations themselves, which is counterproductive. Let us
> improve the core by looping over xp_alloc() @max times when the slow
> path needs to be taken.
>
> Another issue with the current interface, as spotted and fixed by Dries,
> was that when a driver called xsk_buff_alloc_batch() with @max == 0, the
> slow path still allocated and returned a single buffer, which should not
> happen. By introducing the logic from the first paragraph we kill two
> birds with one stone and address this problem as well.

Thanks Maciej and Dries for finding and fixing this.

Acked-by: Magnus Karlsson <magnus.karlsson@...el.com>

> Fixes: 47e4075df300 ("xsk: Batched buffer allocation for the pool")
> Reported-and-tested-by: Dries De Winter <ddewinter@...amedia.com>
> Co-developed-by: Dries De Winter <ddewinter@...amedia.com>
> Signed-off-by: Dries De Winter <ddewinter@...amedia.com>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> ---
>  net/xdp/xsk_buff_pool.c | 25 ++++++++++++++++++-------
>  1 file changed, 18 insertions(+), 7 deletions(-)
>
> diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
> index 29afa880ffa0..5e2e03042ef3 100644
> --- a/net/xdp/xsk_buff_pool.c
> +++ b/net/xdp/xsk_buff_pool.c
> @@ -623,20 +623,31 @@ static u32 xp_alloc_reused(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u3
>         return nb_entries;
>  }
>
> -u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
> +static u32 xp_alloc_slow(struct xsk_buff_pool *pool, struct xdp_buff **xdp,
> +                        u32 max)
>  {
> -       u32 nb_entries1 = 0, nb_entries2;
> +       int i;
>
> -       if (unlikely(pool->dev && dma_dev_need_sync(pool->dev))) {
> +       for (i = 0; i < max; i++) {
>                 struct xdp_buff *buff;
>
> -               /* Slow path */
>                 buff = xp_alloc(pool);
> -               if (buff)
> -                       *xdp = buff;
> -               return !!buff;
> +               if (unlikely(!buff))
> +                       return i;
> +               *xdp = buff;
> +               xdp++;
>         }
>
> +       return max;
> +}
> +
> +u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
> +{
> +       u32 nb_entries1 = 0, nb_entries2;
> +
> +       if (unlikely(pool->dev && dma_dev_need_sync(pool->dev)))
> +               return xp_alloc_slow(pool, xdp, max);
> +
>         if (unlikely(pool->free_list_cnt)) {
>                 nb_entries1 = xp_alloc_reused(pool, xdp, max);
>                 if (nb_entries1 == max)
> --
> 2.34.1
>
>
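Below is a minimal sketch (not part of the patch) of how a driver-side
zero-copy refill path might consume the batch API once this fix is in
place. The my_ring/my_rx_desc types and field names are hypothetical
placeholders; xsk_buff_alloc_batch() and xsk_buff_xdp_get_dma() are the
existing helpers from include/net/xdp_sock_drv.h. The point is that the
caller no longer needs a fallback for the "only one buffer returned"
case on DMA-syncing systems, and @max == 0 simply yields 0 buffers.

#include <net/xdp_sock_drv.h>

/* Hypothetical driver structures, for illustration only. */
struct my_rx_desc {
	dma_addr_t addr;
	u32 len;
};

struct my_ring {
	struct my_rx_desc *desc;	/* HW descriptor ring */
	struct xdp_buff **xdp_bufs;	/* scratch array, at least ring size */
	u16 next_to_use;
	u16 count;
};

/* Refill up to @budget Rx descriptors from the XSK pool in one call.
 * With this fix, xsk_buff_alloc_batch() can fill up to @budget buffers
 * even when the device needs DMA syncing; previously it returned at most
 * one buffer there, forcing drivers to loop on their own.
 */
static u16 my_rx_refill_zc(struct my_ring *ring, struct xsk_buff_pool *pool,
			   u16 budget)
{
	struct xdp_buff **xdp = ring->xdp_bufs;
	u32 nb_buffs, i;

	nb_buffs = xsk_buff_alloc_batch(pool, xdp, budget);

	for (i = 0; i < nb_buffs; i++) {
		struct my_rx_desc *rxd = &ring->desc[ring->next_to_use];

		rxd->addr = xsk_buff_xdp_get_dma(xdp[i]);
		rxd->len = 0;
		if (++ring->next_to_use == ring->count)
			ring->next_to_use = 0;
	}

	return nb_buffs;
}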
