Message-ID: <20160922151403.24648381@redhat.com>
Date:   Thu, 22 Sep 2016 15:14:03 +0200
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Tom Herbert <tom@...bertland.com>
Cc:     Thomas Graf <tgraf@...g.ch>,
        "David S. Miller" <davem@...emloft.net>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Kernel Team <kernel-team@...com>,
        Tariq Toukan <tariqt@...lanox.com>,
        Brenden Blanco <bblanco@...mgrid.com>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Eric Dumazet <eric.dumazet@...il.com>, brouer@...hat.com
Subject: Re: [PATCH RFC 1/3] xdp: Infrastructure to generalize XDP

On Wed, 21 Sep 2016 21:56:58 +0200
Jesper Dangaard Brouer <brouer@...hat.com> wrote:

> > > I'm not opposed to running non-BPF code at XDP. I'm against adding
> > > a linked list of hook consumers.  
> 
> I also worry about the performance impact of a linked list.  We should
> simply benchmark it instead of discussing it! ;-)

(Note, there are some stability issues with this RFC patchset when
removing the xdp program, which I had to work around/patch.)
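
For concreteness, what I'm benchmarking is essentially a per-packet walk
over a list of registered hook consumers, roughly like the sketch below.
This is my own paraphrase for discussion only; the type and function
names are invented and not taken from the RFC:

/* Sketch only: one pointer-chase plus one indirect call per registered
 * hook, for every packet.  Names are invented for illustration;
 * XDP_PASS is the existing verdict from enum xdp_action.
 */
struct xdp_hook_sketch {
	struct xdp_hook_sketch *next;
	int (*run)(void *pkt_data, unsigned int len, void *priv);
	void *priv;
};

static inline int xdp_run_hooks_sketch(struct xdp_hook_sketch *head,
				       void *pkt_data, unsigned int len)
{
	struct xdp_hook_sketch *hook;

	for (hook = head; hook; hook = hook->next) {
		int act = hook->run(pkt_data, len, hook->priv);

		if (act != XDP_PASS)	/* stop on first non-pass verdict */
			return act;
	}
	return XDP_PASS;
}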


I've started benchmarking this, and I only see an added cost of 2.89ns
from these patches; at these crazy speeds that does correspond to -485Kpps.
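
The ns <-> pps conversion is just 10^9 divided by the per-packet time.
A quick userspace check below; note the ~13.2Mpps baseline is my own
illustrative assumption for the arithmetic, not a measured number
quoted in this mail:

/* Quick userspace check of the ns <-> pps relation.
 * The 13.2Mpps baseline is assumed for illustration only.
 */
#include <stdio.h>

int main(void)
{
	double base_pps = 13.2e6;              /* assumed baseline rate */
	double base_ns  = 1e9 / base_pps;      /* ~75.8 ns per packet   */
	double extra_ns = 2.89;                /* measured added cost   */
	double new_pps  = 1e9 / (base_ns + extra_ns);

	printf("baseline : %.3f Mpps (%.2f ns/pkt)\n",
	       base_pps / 1e6, base_ns);
	printf("with cost: %.3f Mpps (%.0f Kpps delta)\n",
	       new_pps / 1e6, (new_pps - base_pps) / 1e3);
	return 0;
}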

I was really expecting to see a higher cost from this approach.

I tested this on two different machines. One was supposed to work with
DDIO, but I could not get DDIO working on that machine (resulting in a
max of 12.7Mpps drop), even though the mlx5 card does work with DDIO.
I even removed the mlx5 and used the same slot, but no luck.  (A
side-note: I also measured a 16ns performance difference depending on
which PCIe slot I'm using.)

The reason I wanted to benchmark this on a DDIO machine is that I
suspect the added cost could be hiding behind the cache miss.
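
What I mean by "hiding", as a toy illustration (not driver code, and
reusing my invented names from the sketch above): without DDIO the
first touch of the packet data misses all the way to DRAM, and a few
ns of independent work can overlap with that stall instead of adding
to the total:

/* Toy illustration only.  The pointer chases and indirect calls of the
 * hook walk are independent of the packet-data load, so an out-of-order
 * CPU can execute them while the (cold) data miss is outstanding.
 */
static unsigned int touch_packet_sketch(struct xdp_hook_sketch *hooks,
					unsigned char *pkt_data,
					unsigned int len)
{
	unsigned int sum;

	__builtin_prefetch(pkt_data);	/* start the (cold) fetch early   */

	sum = xdp_run_hooks_sketch(hooks, pkt_data, len);
					/* few ns overlapping the miss    */

	sum += pkt_data[0];		/* the actual, possibly hidden, miss */
	return sum;
}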

Well, I'm running out of time benchmarking this stuff; I must prepare
for my Network Performance Workshop ;-)


(A side-note: my Skylake motherboard also had a PCI slot, so I found an
old e1000 NIC in my garage, and it worked!)
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
