Message-ID: <aDnX3FVPZ3AIZDGg@mini-arch>
Date: Fri, 30 May 2025 09:07:56 -0700
From: Stanislav Fomichev <stfomichev@...il.com>
To: "e.kubanski" <e.kubanski@...tner.samsung.com>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org, bjorn@...nel.org,
	magnus.karlsson@...el.com, maciej.fijalkowski@...el.com,
	jonathan.lemon@...il.com
Subject: Re: [PATCH bpf v2] xsk: Fix out of order segment free in
 __xsk_generic_xmit()

On 05/30, e.kubanski wrote:
> Move xsk completion queue descriptor write-back to destructor.
> 
> Fix xsk descriptor management in the completion queue. The descriptor
> management mechanism didn't account for situations where completion
> queue submission can happen out of order with respect to descriptor
> write-back.
> 
> __xsk_generic_xmit() was assigning the descriptor to a slot right
> after reserving the completion queue slot. If multiple CPUs access
> the same completion queue after xmit, this can result in out-of-order
> submission of an invalid descriptor batch: an SKB destructor call can
> submit a batch that is still in use by another CPU instead of the
> descriptors that were actually transmitted. This could result in a
> user-space <-> kernel-space data race.
> 
> Forbid the following possible out-of-order submission:
> CPU A: Reservation + Descriptor Write
> CPU B: Reservation + Descriptor Write
> CPU B: Submit (publishes the first batch, the one reserved by CPU A)
> CPU A: Submit (publishes the second batch, the one reserved by CPU B)
> 
> Move Descriptor Write to the submission phase:
> CPU A: Reservation (only moves the local writer)
> CPU B: Reservation (only moves the local writer)
> CPU B: Descriptor Write + Submit
> CPU A: Descriptor Write + Submit
> 
> This eliminates the potential out-of-order free of xsk buffers.

I'm not sure I understand what the issue is here. If you're using the
same XSK from different CPUs, shouldn't you take care of the ordering
yourself on the userspace side?
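
For reference, here's the interleaving the commit message describes,
replayed as a minimal single-threaded model. This is only a sketch,
not the kernel's xsk_queue code; the ring layout and helper names
below are hypothetical stand-ins:

#include <stdio.h>

#define RING_SIZE 8
#define MASK (RING_SIZE - 1)

struct cq {
	unsigned int cached_prod;           /* reservations (local writer) */
	unsigned int prod;                  /* published to the consumer */
	unsigned long long desc[RING_SIZE];
};

/* Current scheme: the descriptor is written at reservation time. */
static void reserve_write(struct cq *q, unsigned long long d)
{
	q->desc[q->cached_prod++ & MASK] = d;
}

/* Submission only advances the shared producer: it publishes the
 * oldest reserved slots, regardless of who reserved them. */
static void submit(struct cq *q)
{
	q->prod++;
}

/* Proposed scheme: reservation only moves the local writer; the
 * descriptor is written into the next unpublished slot at
 * submission time. */
static void reserve(struct cq *q)
{
	q->cached_prod++;
}

static void write_submit(struct cq *q, unsigned long long d)
{
	q->desc[q->prod & MASK] = d;
	q->prod++;
}

int main(void)
{
	struct cq bug = { 0 }, fix = { 0 };

	reserve_write(&bug, 0xA);   /* CPU A reserves slot 0, writes A */
	reserve_write(&bug, 0xB);   /* CPU B reserves slot 1, writes B */
	submit(&bug);               /* CPU B's skb completes first... */
	/* ...but slot 0 is what got published: CPU A's descriptor. */
	printf("current:  published %#llx\n",
	       bug.desc[(bug.prod - 1) & MASK]);

	reserve(&fix);              /* CPU A: reservation only */
	reserve(&fix);              /* CPU B: reservation only */
	write_submit(&fix, 0xB);    /* CPU B writes + submits its own */
	printf("proposed: published %#llx\n",
	       fix.desc[(fix.prod - 1) & MASK]);
	return 0;
}

Run, the current scheme publishes 0xa after CPU B's completion (a
descriptor whose buffer may still be in flight), while the proposed
scheme publishes 0xb.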

> Signed-off-by: Eryk Kubanski <e.kubanski@...tner.samsung.com>
> Fixes: e6c4047f5122 ("xsk: Use xsk_buff_pool directly for cq functions")
> ---
>  include/linux/skbuff.h |  2 ++
>  net/xdp/xsk.c          | 17 +++++++++++------
>  net/xdp/xsk_queue.h    | 11 +++++++++++
>  3 files changed, 24 insertions(+), 6 deletions(-)
> 
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 5520524c93bf..cc37b62638cd 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -624,6 +624,8 @@ struct skb_shared_info {
>  		void		*destructor_arg;
>  	};
>  
> +	u64 xsk_descs[MAX_SKB_FRAGS];

This is definitely a no-go (sk_buff and skb_shared_info space is
precious).
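
For scale, a back-of-envelope sketch of what the new field costs,
assuming the common default of MAX_SKB_FRAGS == 17 (it's configurable
via CONFIG_MAX_SKB_FRAGS on recent kernels):

#include <stdio.h>
#include <stdint.h>

#define MAX_SKB_FRAGS 17	/* assumed default for 4K pages */

int main(void)
{
	/* skb_shared_info lives at the end of every skb's data
	 * buffer, so this overhead is paid per packet. */
	printf("extra bytes per skb: %zu\n",
	       MAX_SKB_FRAGS * sizeof(uint64_t));	/* prints 136 */
	return 0;
}

That's an extra 136 bytes on every skb, not just on XSK traffic.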
