Message-ID: <0bcdd667-1811-4bde-8313-1a7e3abe55ad@redhat.com>
Date: Thu, 27 Nov 2025 12:35:47 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Jason Xing <kerneljasonxing@...il.com>, davem@...emloft.net,
 edumazet@...gle.com, kuba@...nel.org, bjorn@...nel.org,
 magnus.karlsson@...el.com, maciej.fijalkowski@...el.com,
 jonathan.lemon@...il.com, sdf@...ichev.me, ast@...nel.org,
 daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org,
 Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH net-next v2 2/3] xsk: use atomic operations around
 cached_prod for copy mode

On 11/25/25 9:54 AM, Jason Xing wrote:
> diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
> index 44cc01555c0b..3a023791b273 100644
> --- a/net/xdp/xsk_queue.h
> +++ b/net/xdp/xsk_queue.h
> @@ -402,13 +402,28 @@ static inline void xskq_prod_cancel_n(struct xsk_queue *q, u32 cnt)
>  	q->cached_prod -= cnt;
>  }
>  
> -static inline int xskq_prod_reserve(struct xsk_queue *q)
> +static inline bool xsk_cq_cached_prod_nb_free(struct xsk_queue *q)
>  {
> -	if (xskq_prod_is_full(q))
> +	u32 cached_prod = atomic_read(&q->cached_prod_atomic);
> +	u32 free_entries = q->nentries - (cached_prod - q->cached_cons);
> +
> +	if (free_entries)
> +		return true;
> +
> +	/* Refresh the local tail pointer */
> +	q->cached_cons = READ_ONCE(q->ring->consumer);
> +	free_entries = q->nentries - (cached_prod - q->cached_cons);
> +
> +	return free_entries ? true : false;
> +}
_If_ different CPUs can call xsk_cq_cached_prod_reserve() simultaneously
(as the existence of the spinlock suggests), the above change introduces
a race:

xsk_cq_cached_prod_nb_free() can return true on CPU1 when num_free == 1,
and CPU2 can increment cached_prod_atomic in xsk_cq_cached_prod_reserve()
before CPU1 has completed its own xsk_cq_cached_prod_reserve().

/P

