Message-ID: <CAJ8uoz1fZ3zYVKergPn-QYRQEpPfC_jNgtY3wzoxxJWFF22LKA@mail.gmail.com>
Date: Mon, 24 Feb 2025 13:55:41 +0100
From: Magnus Karlsson <magnus.karlsson@...il.com>
To: Wang Liang <wangliang74@...wei.com>
Cc: bjorn@...nel.org, magnus.karlsson@...el.com, maciej.fijalkowski@...el.com, 
	jonathan.lemon@...il.com, davem@...emloft.net, edumazet@...gle.com, 
	kuba@...nel.org, pabeni@...hat.com, horms@...nel.org, ast@...nel.org, 
	daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com, 
	yuehaibing@...wei.com, zhangchangzhong@...wei.com, netdev@...r.kernel.org, 
	bpf@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net] xsk: fix __xsk_generic_xmit() error code when cq is full

On Sat, 22 Feb 2025 at 10:18, Wang Liang <wangliang74@...wei.com> wrote:
>
> When the cq reservation fails, the error code, which is initialized to
> zero in __xsk_generic_xmit(), is not set. That means the packet is not
> sent successfully, but sendto() returns ok.
>
> Set the error code, and make the xskq_prod_reserve_addr()/xskq_prod_reserve()
> return values more meaningful when the queue is full.

Hi Wang,

I agree that this would have been a really good idea if it had been
implemented from day one, but now I do not dare to change this, since
it would change the uapi. Say you have the following quite common
code snippet for sending a packet with AF_XDP in skb mode:

err = sendmsg(fd, &msg, 0);
if (err < 0 && errno != EAGAIN && errno != EBUSY)
    goto die_due_to_error;
/* continue with code */

With your change, this code would suddenly die when the completion
ring is full, instead of working. Maybe there is a piece of code that
cleans the completion ring after these lines, so that the next time
sendmsg() is called the packet gets sent; such an application used to
work and would now break.

So I say: let us not do this. But if anyone has another opinion, please share.

Thanks for the report.

Magnus

> Signed-off-by: Wang Liang <wangliang74@...wei.com>
> ---
>  net/xdp/xsk.c       | 3 ++-
>  net/xdp/xsk_queue.h | 4 ++--
>  2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 89d2bef96469..7d0d2f40ca57 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -802,7 +802,8 @@ static int __xsk_generic_xmit(struct sock *sk)
>                  * if there is space in it. This avoids having to implement
>                  * any buffering in the Tx path.
>                  */
> -               if (xsk_cq_reserve_addr_locked(xs->pool, desc.addr))
> +               err = xsk_cq_reserve_addr_locked(xs->pool, desc.addr);
> +               if (err)
>                         goto out;
>
>                 skb = xsk_build_skb(xs, &desc);
> diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
> index 46d87e961ad6..ac90b7fcc027 100644
> --- a/net/xdp/xsk_queue.h
> +++ b/net/xdp/xsk_queue.h
> @@ -371,7 +371,7 @@ static inline void xskq_prod_cancel_n(struct xsk_queue *q, u32 cnt)
>  static inline int xskq_prod_reserve(struct xsk_queue *q)
>  {
>         if (xskq_prod_is_full(q))
> -               return -ENOSPC;
> +               return -ENOBUFS;
>
>         /* A, matches D */
>         q->cached_prod++;
> @@ -383,7 +383,7 @@ static inline int xskq_prod_reserve_addr(struct xsk_queue *q, u64 addr)
>         struct xdp_umem_ring *ring = (struct xdp_umem_ring *)q->ring;
>
>         if (xskq_prod_is_full(q))
> -               return -ENOSPC;
> +               return -ENOBUFS;
>
>         /* A, matches D */
>         ring->desc[q->cached_prod++ & q->ring_mask] = addr;
> --
> 2.34.1
>
>
