Message-ID: <1a22e7e9-e6ef-028f-dffa-e954207dc24d@redhat.com>
Date:   Wed, 17 Aug 2022 14:39:39 +0200
From:   Jesper Dangaard Brouer <jbrouer@...hat.com>
To:     Lorenzo Bianconi <lorenzo@...nel.org>, bpf@...r.kernel.org
Cc:     brouer@...hat.com, ast@...nel.org, daniel@...earbox.net,
        andrii@...nel.org, netdev@...r.kernel.org, davem@...emloft.net,
        kuba@...nel.org, edumazet@...gle.com, pabeni@...hat.com,
        hawk@...nel.org, john.fastabend@...il.com,
        lorenzo.bianconi@...hat.com
Subject: Re: [PATCH v2 bpf-next] xdp: report rx queue index in xdp_frame


On 17/08/2022 09.40, Lorenzo Bianconi wrote:
> Report the rx queue index in xdp_frame, taking it from the xdp_buff
> xdp_rxq_info pointer. The xdp_frame queue_index is currently used in
> cpumap code to convert the xdp_frame back into an xdp_buff, allowing
> the eBPF program attached to the map entry to differentiate traffic
> according to the receiving hw queue. xdp_frame size is not increased
> by adding queue_index, since an existing alignment padding hole in the
> structure is used for the new field.
> 
> Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>

(Sorry, I replied to v1 and not this v2.)

I'm still unsure about this change, because XDP-hints will also carry
the rx_queue number, and placing it in XDP-hints automatically makes it
available to AF_XDP consumers.
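
For illustration, something along these lines (untested sketch; the
metadata struct and prog name are made up, this is not the actual
XDP-hints format) already lets a driver-level XDP prog expose the rx
queue via the metadata area, which AF_XDP consumers can then read in
front of the packet data:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* hypothetical metadata layout, for illustration only */
  struct meta_sketch {
          __u32 rx_queue;
  };

  SEC("xdp")
  int xdp_store_rx_queue(struct xdp_md *ctx)
  {
          struct meta_sketch *meta;
          void *data;

          /* grow the metadata area in front of the packet data */
          if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
                  return XDP_PASS;

          data = (void *)(long)ctx->data;
          meta = (void *)(long)ctx->data_meta;
          if ((void *)(meta + 1) > data)
                  return XDP_PASS;

          meta->rx_queue = ctx->rx_queue_index;
          return XDP_PASS;
  }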

I do think it is relevant for the BPF-prog to get access to the rx_queue
index, because it can be used for scaling the workload.
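
E.g. a cpumap prog could spread or filter work based on it. Roughly
(untested sketch; prog/section names are mine, and it assumes this
patch so that ctx->rx_queue_index is populated after the frame-to-buff
conversion):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("xdp/cpumap")
  int cpumap_per_queue(struct xdp_md *ctx)
  {
          /* with this patch the original hw rx queue index is visible
           * here, instead of reading back as 0 */
          __u32 qidx = ctx->rx_queue_index;

          /* toy policy: only process even-numbered queues on this CPU */
          if (qidx & 1)
                  return XDP_DROP;

          return XDP_PASS;
  }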


> ---
> Changes since v1:
> - rebase on top of bpf-next
> ---
>   include/net/xdp.h   | 2 ++
>   kernel/bpf/cpumap.c | 2 +-
>   2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/include/net/xdp.h b/include/net/xdp.h
> index 04c852c7a77f..3567866b0af5 100644
> --- a/include/net/xdp.h
> +++ b/include/net/xdp.h
> @@ -172,6 +172,7 @@ struct xdp_frame {
>   	struct xdp_mem_info mem;
>   	struct net_device *dev_rx; /* used by cpumap */
>   	u32 flags; /* supported values defined in xdp_buff_flags */
> +	u32 queue_index;
>   };
>   
>   static __always_inline bool xdp_frame_has_frags(struct xdp_frame *frame)
> @@ -301,6 +302,7 @@ struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp)
>   
>   	/* rxq only valid until napi_schedule ends, convert to xdp_mem_info */
>   	xdp_frame->mem = xdp->rxq->mem;
> +	xdp_frame->queue_index = xdp->rxq->queue_index;
>   
>   	return xdp_frame;
>   }
> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
> index b5ba34ddd4b6..48003450c98c 100644
> --- a/kernel/bpf/cpumap.c
> +++ b/kernel/bpf/cpumap.c
> @@ -228,7 +228,7 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
>   
>   		rxq.dev = xdpf->dev_rx;
>   		rxq.mem = xdpf->mem;
> -		/* TODO: report queue_index to xdp_rxq_info */
> +		rxq.queue_index = xdpf->queue_index;
>   
>   		xdp_convert_frame_to_buff(xdpf, &xdp);
>   
