Date:   Thu, 22 Sep 2016 07:46:23 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     Tom Herbert <tom@...bertland.com>, Thomas Graf <tgraf@...g.ch>,
        "David S. Miller" <davem@...emloft.net>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        Kernel Team <kernel-team@...com>,
        Tariq Toukan <tariqt@...lanox.com>,
        Brenden Blanco <bblanco@...mgrid.com>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>
Subject: Re: [PATCH RFC 1/3] xdp: Infrastructure to generalize XDP

On Thu, 2016-09-22 at 15:14 +0200, Jesper Dangaard Brouer wrote:
> On Wed, 21 Sep 2016 21:56:58 +0200
> Jesper Dangaard Brouer <brouer@...hat.com> wrote:
> 
> > > > I'm not opposed to running non-BPF code at XDP. I'm against adding
> > > > a linked list of hook consumers.  
> > 
> > I also worry about the performance impact of a linked list.  We should
> > simply benchmark it instead of discussing it! ;-)
> 
> (Note, there are some stability issues with this RFC patchset when
> removing the xdp program, which I had to work around/patch)
> 
> 
> I've started benchmarking this, and I only see an added cost of 2.89 ns
> from these patches; at these crazy speeds that corresponds to -485 Kpps.
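
(For scale, those two figures are consistent with a baseline around
13 Mpps: 1/13.1 Mpps ~= 76.3 ns per packet, and 76.3 + 2.89 ~= 79.2 ns
per packet ~= 12.6 Mpps, i.e. roughly 480 Kpps less.)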

I claim the methodology is too biased.

At full speed, all the extra code is hot in the caches, and your core
has full access to the memory bus anyway. Even the branch predictor has
fresh information.

Now, in a mixed workload, where all cores compete to access L2/L3 and
RAM, things might be very different.
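
As an aside, here is a minimal user-space sketch of that effect (the
stand-in workload, names and sizes are mine and purely illustrative,
x86-only): time the same loop once with its working set warm and once
after flushing it with clflush, and the cycle counts diverge sharply.

/* Hypothetical warm-vs-cold measurement; not from the patchset.
 * Build: gcc -O2 warm_vs_cold.c -o warm_vs_cold
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <x86intrin.h>		/* __rdtsc(), _mm_clflush(), _mm_mfence() */

#define WORKSET (64 * 1024)	/* stand-in per-packet state */

static unsigned char state[WORKSET];

/* Stand-in for the hook-dispatch code under test. */
static uint64_t touch_state(void)
{
	uint64_t sum = 0;

	for (size_t i = 0; i < WORKSET; i += 64)
		sum += state[i];
	return sum;
}

/* Evict the working set from all cache levels. */
static void flush_state(void)
{
	for (size_t i = 0; i < WORKSET; i += 64)
		_mm_clflush(&state[i]);
	_mm_mfence();
}

int main(void)
{
	volatile uint64_t sink;
	uint64_t t0, warm, cold;

	memset(state, 1, sizeof(state));

	sink = touch_state();		/* warm everything up */
	t0 = __rdtsc();
	sink = touch_state();		/* hot-cache run */
	warm = __rdtsc() - t0;

	flush_state();
	t0 = __rdtsc();
	sink = touch_state();		/* cold-cache run */
	cold = __rdtsc() - t0;

	printf("warm: %llu cycles, cold: %llu cycles\n",
	       (unsigned long long)warm, (unsigned long long)cold);
	(void)sink;
	return 0;
}

The hot number is what a tight pps benchmark sees; the cold number is
closer to what the same code costs once other work has evicted it.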

Testing icache/dcache pressure is not a matter of measuring how many
Kpps you add or remove on a hot path.

A latency test, run while other CPUs are busy reading/writing all over
memory and your caches are cold, would be useful.
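
Something along these lines, as a rough sketch (all names, sizes and
thread counts are illustrative, not a definitive harness, x86-only):
a few threads stream through buffers far larger than L3 to keep the
memory bus and shared caches busy, while the measured thread flushes
its packet buffer and samples single invocations of the code under
test.

/* Hypothetical cold-cache latency test under memory contention.
 * Build: gcc -O2 -pthread cold_latency.c -o cold_latency
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <x86intrin.h>

#define THRASH_BUF	(256u << 20)	/* much larger than typical L3 */
#define NTHREADS	3		/* background cache thrashers */

static volatile int stop;

/* Stream writes over a huge buffer to keep L2/L3 and the bus busy. */
static void *thrasher(void *arg)
{
	unsigned char *buf = malloc(THRASH_BUF);
	unsigned v = 0;

	(void)arg;
	if (!buf)
		return NULL;
	while (!stop)
		memset(buf, v++ & 0xff, THRASH_BUF);
	free(buf);
	return NULL;
}

/* Stand-in for one pass through the XDP hook path. */
static uint64_t code_under_test(unsigned char *pkt, size_t len)
{
	uint64_t sum = 0;

	for (size_t i = 0; i < len; i += 64)
		sum += pkt[i];
	return sum;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	unsigned char pkt[2048];
	volatile uint64_t sink;
	int i;

	memset(pkt, 0xab, sizeof(pkt));
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, thrasher, NULL);

	for (int round = 0; round < 10; round++) {
		uint64_t t0;

		/* force a cold start for each sample */
		for (size_t off = 0; off < sizeof(pkt); off += 64)
			_mm_clflush(&pkt[off]);
		_mm_mfence();

		t0 = __rdtsc();
		sink = code_under_test(pkt, sizeof(pkt));
		printf("sample %d: %llu cycles\n", round,
		       (unsigned long long)(__rdtsc() - t0));
	}

	stop = 1;
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	(void)sink;
	return 0;
}

Pinning the threads to distinct cores (taskset or
pthread_setaffinity_np) and adding more thrashers than cores sharing
the L3 makes the numbers more repeatable.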


