Date:   Tue, 19 May 2020 12:15:12 +0200
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Hangbin Liu <liuhangbin@...il.com>
Cc:     Toke Høiland-Jørgensen <toke@...hat.com>,
        bpf@...r.kernel.org, netdev@...r.kernel.org,
        Jiri Benc <jbenc@...hat.com>,
        Eelco Chaudron <echaudro@...hat.com>, ast@...nel.org,
        Daniel Borkmann <daniel@...earbox.net>,
        Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
        brouer@...hat.com
Subject: Re: [RFC PATCHv2 bpf-next 1/2] xdp: add a new helper for dev map
 multicast support

On Mon, 18 May 2020 16:45:27 +0800
Hangbin Liu <liuhangbin@...il.com> wrote:

> Hi Toke,
> 
> On Fri, Apr 24, 2020 at 04:34:49PM +0200, Toke Høiland-Jørgensen wrote:
> > 
> > Yeah, the new helper is much cleaner!
> >   
> > > To achieve this I add a new ex_map to struct bpf_redirect_info.
> > > In the helper I set tgt_value to NULL to distinguish it from
> > > bpf_xdp_redirect_map().
> > >
> > > We also add a flag *BPF_F_EXCLUDE_INGRESS* in case you don't want to
> > > create an exclude map for each interface and just want to exclude the
> > > ingress interface.
> > >
> > > The general data path is kept in net/core/filter.c. The native data
> > > path is in kernel/bpf/devmap.c so we can use direct calls to
> > > get better performance.  
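
(For reference, an XDP program would use the new helper roughly as in the
sketch below. The helper name and signature, bpf_redirect_map_multi(map,
ex_map, flags), follow this RFC series and are not part of a released
kernel; the devmap layout is only illustrative.)

/*
 * Minimal sketch of an XDP program using the helper described above.
 * Assumes the RFC's bpf_redirect_map_multi() declaration is available;
 * it is not in upstream kernel/libbpf headers at the time of this RFC.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Devmap with the interfaces to broadcast to (key = value = ifindex). */
struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP_HASH);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
	__uint(max_entries, 32);
} forward_map SEC(".maps");

SEC("xdp")
int xdp_mcast(struct xdp_md *ctx)
{
	/*
	 * Redirect to every interface in forward_map. A NULL exclude map
	 * plus BPF_F_EXCLUDE_INGRESS skips only the interface the packet
	 * arrived on, so no per-interface exclude map is needed.
	 */
	return bpf_redirect_map_multi(&forward_map, NULL,
				      BPF_F_EXCLUDE_INGRESS);
}

char _license[] SEC("license") = "GPL";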
> > 
> > Got any performance numbers? :)  
> 
> Recently I tried pktgen to get performance numbers. It works in
> native mode, although the numbers are not high.
> 
> I tested it on a VM with 1 CPU core.

Performance testing on a VM doesn't really make much sense.

> Forwarding to 7 ports, with a pktgen config like:
> echo "count 10000000" > /proc/net/pktgen/veth0
> echo "clone_skb 0" > /proc/net/pktgen/veth0
> echo "pkt_size 64" > /proc/net/pktgen/veth0
> echo "dst 224.1.1.10" > /proc/net/pktgen/veth0
> 
> I got forwarding numbers like:
> Forwarding     159958 pkt/s
> Forwarding     160213 pkt/s
> Forwarding     160448 pkt/s
> 
> But when testing generic mode, the system crashed immediately. The code
> path is:
> do_xdp_generic()
>   - netif_receive_generic_xdp()
>     - pskb_expand_head()    <- skb_is_nonlinear(skb)
>       - BUG_ON(skb_shared(skb))
> 
> So I want to ask: do you have the same issue with pktgen? Any workaround?
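
(For context, the check that fires is the skb_shared() test in
pskb_expand_head(): pktgen holds its own reference on the skb so it can
reuse and re-transmit it, so by the time generic XDP tries to reallocate
the head of the nonlinear skb it looks shared. Paraphrased from the kernel
sources, not compilable on its own:)

/* include/linux/skbuff.h (paraphrased): an skb counts as shared when
 * more than one reference holds it. */
static inline bool skb_shared(const struct sk_buff *skb)
{
	return refcount_read(&skb->users) != 1;
}

/* net/core/skbuff.c (paraphrased): netif_receive_generic_xdp() calls this
 * to linearize / add headroom, and it refuses to touch a shared skb. */
int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail, gfp_t gfp_mask)
{
	BUG_ON(nhead < 0);
	BUG_ON(skb_shared(skb));	/* <- the crash reported above */
	/* ... reallocate the skb head ... */
	return 0;
}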

Pktgen is not meant to be used on virtual devices.

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
