Date:   Tue, 20 Apr 2021 18:54:40 +0200
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Lorenzo Bianconi <lorenzo@...nel.org>
Cc:     bpf@...r.kernel.org, netdev@...r.kernel.org,
        lorenzo.bianconi@...hat.com, davem@...emloft.net, kuba@...nel.org,
        ast@...nel.org, daniel@...earbox.net, song@...nel.org,
        toke@...hat.com, brouer@...hat.com
Subject: Re: [PATCH v3 bpf-next] cpumap: bulk skb using
 netif_receive_skb_list

On Tue, 20 Apr 2021 16:05:14 +0200
Lorenzo Bianconi <lorenzo@...nel.org> wrote:

> Rely on the netif_receive_skb_list routine to send skbs converted from
> xdp_frames up the stack in cpu_map_kthread_run, in order to improve
> i-cache usage. The proposed patch has been tested by running the
> xdp_redirect_cpu bpf sample available in the kernel tree, which
> redirects UDP frames from the ixgbe driver to a cpumap entry and then
> to the networking stack. UDP frames are generated using pktgen and are
> discarded by the UDP layer.
> 
> $xdp_redirect_cpu  --cpu <cpu> --progname xdp_cpu_map0 --dev <eth>
> 
> bpf-next: ~2.35Mpps
> bpf-next + cpumap skb-list: ~2.72Mpps
> 
> Since netif_receive_skb_list does not return the number of discarded packets,
> remove drop counter from xdp_cpumap_kthread tracepoint and update related
> xdp samples.
> 
> Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> ---
> Changes since v2:
> - remove drop counter and update related xdp samples
> - rebased on top of bpf-next
> 
> Changes since v1:
> - fixed comment
> - rebased on top of bpf-next tree
> ---
>  include/trace/events/xdp.h          | 14 +++++---------
>  kernel/bpf/cpumap.c                 | 16 +++++++---------
>  samples/bpf/xdp_monitor_kern.c      |  6 ++----
>  samples/bpf/xdp_monitor_user.c      | 14 ++++++--------
>  samples/bpf/xdp_redirect_cpu_kern.c | 12 +++++-------
>  samples/bpf/xdp_redirect_cpu_user.c | 10 ++++------
>  6 files changed, 29 insertions(+), 43 deletions(-)
> 
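The speedup here comes from batching the entry point into the stack:
instead of one netif_receive_skb() call per packet, the kthread queues
the whole batch on an skb list and flushes it once, keeping the stack's
receive path hot in the i-cache. Roughly like this (a minimal sketch of
the idea, not the exact cpumap code; frames[]/n stand in for the
kthread's dequeued batch):

	struct sk_buff *skb;
	LIST_HEAD(list);	/* collects the skbs of one batch */
	int i;

	for (i = 0; i < n; i++) {
		/* convert each xdp_frame of the batch into an skb */
		skb = xdp_build_skb_from_frame(frames[i],
					       frames[i]->dev_rx);
		if (!skb)
			continue;
		list_add_tail(&skb->list, &list);
	}
	/* one entry into the network stack for the whole batch */
	netif_receive_skb_list(&list);
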
> diff --git a/include/trace/events/xdp.h b/include/trace/events/xdp.h
> index fcad3645a70b..52ecfe9c7e25 100644
> --- a/include/trace/events/xdp.h
> +++ b/include/trace/events/xdp.h
> @@ -184,16 +184,15 @@ DEFINE_EVENT(xdp_redirect_template, xdp_redirect_map_err,
>  
>  TRACE_EVENT(xdp_cpumap_kthread,
>  
> -	TP_PROTO(int map_id, unsigned int processed,  unsigned int drops,
> -		 int sched, struct xdp_cpumap_stats *xdp_stats),
> +	TP_PROTO(int map_id, unsigned int processed, int sched,
> +		 struct xdp_cpumap_stats *xdp_stats),
>  
> -	TP_ARGS(map_id, processed, drops, sched, xdp_stats),
> +	TP_ARGS(map_id, processed, sched, xdp_stats),
>  
>  	TP_STRUCT__entry(
>  		__field(int, map_id)
>  		__field(u32, act)
>  		__field(int, cpu)
> -		__field(unsigned int, drops)
>  		__field(unsigned int, processed)

So, struct member @processed will take over the slot previously
occupied by @drops.

Can you please test how an old xdp_monitor program reacts to this?
Will it fail, or will it extract and show wrong values?

The xdp_monitor tool lives in several external git repos:

 https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/xdp_monitor_kern.c
 https://github.com/xdp-project/xdp-tutorial/tree/master/tracing02-xdp-monitor

Do you have any plans for fixing those tools?
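
For reference, those tools hard-code the tracepoint layout along these
lines (offsets taken from the pre-patch
/sys/kernel/debug/tracing/events/xdp/xdp_cpumap_kthread/format):

	/* Stale layout as hard-coded in the external xdp_monitor tools.
	 * With @drops gone, every field from offset 20 onwards shifts
	 * down by 4 bytes, so the worry is that such a reader keeps
	 * loading but shows shifted values, e.g. @processed reported
	 * as drops.
	 */
	struct cpumap_kthread_ctx {
		u64 __pad;		/* first 8 bytes not accessible to bpf */
		int map_id;		/* offset:8;  size:4; signed:1; */
		u32 act;		/* offset:12; size:4; signed:0; */
		int cpu;		/* offset:16; size:4; signed:1; */
		unsigned int drops;	/* offset:20; size:4; signed:0; */
		unsigned int processed;	/* offset:24; size:4; signed:0; */
		int sched;		/* offset:28; size:4; signed:1; */
		unsigned int xdp_pass;	/* offset:32; size:4; signed:0; */
		unsigned int xdp_drop;	/* offset:36; size:4; signed:0; */
		unsigned int xdp_redirect; /* offset:40; size:4; signed:0; */
	};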


>  		__field(int, sched)
>  		__field(unsigned int, xdp_pass)
> @@ -205,7 +204,6 @@ TRACE_EVENT(xdp_cpumap_kthread,
>  		__entry->map_id		= map_id;
>  		__entry->act		= XDP_REDIRECT;
>  		__entry->cpu		= smp_processor_id();
> -		__entry->drops		= drops;
>  		__entry->processed	= processed;
>  		__entry->sched	= sched;
>  		__entry->xdp_pass	= xdp_stats->pass;
> @@ -215,13 +213,11 @@ TRACE_EVENT(xdp_cpumap_kthread,
>  
>  	TP_printk("kthread"
>  		  " cpu=%d map_id=%d action=%s"
> -		  " processed=%u drops=%u"
> -		  " sched=%d"
> +		  " processed=%u sched=%u"
>  		  " xdp_pass=%u xdp_drop=%u xdp_redirect=%u",
>  		  __entry->cpu, __entry->map_id,
>  		  __print_symbolic(__entry->act, __XDP_ACT_SYM_TAB),
> -		  __entry->processed, __entry->drops,
> -		  __entry->sched,
> +		  __entry->processed, __entry->sched,
>  		  __entry->xdp_pass, __entry->xdp_drop, __entry->xdp_redirect)
>  );



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
