Date:   Sat, 17 Apr 2021 01:00:34 +0200
From:   Daniel Borkmann <daniel@...earbox.net>
To:     Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
        Magnus Karlsson <magnus.karlsson@...il.com>
Cc:     Lorenzo Bianconi <lorenzo@...nel.org>, bpf <bpf@...r.kernel.org>,
        Network Development <netdev@...r.kernel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Alexei Starovoitov <ast@...nel.org>, shayagr@...zon.com,
        sameehj@...zon.com, John Fastabend <john.fastabend@...il.com>,
        David Ahern <dsahern@...nel.org>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Eelco Chaudron <echaudro@...hat.com>,
        Jason Wang <jasowang@...hat.com>,
        Alexander Duyck <alexander.duyck@...il.com>,
        Saeed Mahameed <saeed@...nel.org>,
        "Fijalkowski, Maciej" <maciej.fijalkowski@...el.com>,
        Tirthendu <tirthendu.sarkar@...el.com>
Subject: Re: [PATCH v8 bpf-next 00/14] mvneta: introduce XDP multi-buffer
 support

On 4/16/21 11:29 PM, Lorenzo Bianconi wrote:
>>
>> Took your patches for a test run with the AF_XDP sample xdpsock on an
>> i40e card and the throughput degradation is between 2 and 6% depending
>> on the setup and the xdpsock microbenchmark being executed. And this is
>> without sending any multi-frame packets, just single-frame ones. Tirtha
>> made changes to the i40e driver to support this new interface, so that
>> change is included in the measurements.
> 
> Thanks for working on it. Assuming the fragmented part is only initialized/accessed
> when mb is set (i.e. for multi-frame packets), I would not expect any throughput
> degradation in the single-frame scenario. Can you please share the i40e
> support added by Tirtha?

Thanks Tirtha & Magnus for adding and testing mb support for i40e, and for sharing
those data points; a degradation of 2-6% when mb is not used would definitely not be
acceptable. It would be great to root-cause and debug this further with Lorenzo;
there really should be close to /zero/ additional overhead, to avoid regressing
existing performance-sensitive workloads like load balancers, etc. once they upgrade
their kernels/drivers.
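
To make that expectation concrete, below is a rough sketch of the pattern being
assumed -- struct, field, and helper names are illustrative only, not the actual
code from this series. The point is that the single-frame fast path returns before
ever touching the frags area, so the only cost it pays is one well-predicted branch
on the mb bit:

#include <stdbool.h>

/* Illustrative stand-ins for the real xdp_buff/shared_info layout. */
struct frag_sketch {
        void *addr;
        unsigned int len;
};

struct shared_info_sketch {             /* would live in the buffer tailroom */
        unsigned int nr_frags;
        struct frag_sketch frags[17];
};

struct xdp_buff_sketch {
        void *data;
        void *data_end;
        bool mb;                        /* set only for multi-buffer frames */
        struct shared_info_sketch *sinfo;
};

static unsigned int xdp_total_len(const struct xdp_buff_sketch *xdp)
{
        unsigned int len = (char *)xdp->data_end - (char *)xdp->data;
        unsigned int i;

        if (!xdp->mb)                   /* single frame: no frags access at all */
                return len;

        for (i = 0; i < xdp->sinfo->nr_frags; i++)
                len += xdp->sinfo->frags[i].len;
        return len;
}

As long as drivers only set mb (and only initialize the shared_info) for frames
that actually carry fragments, the single-buffer path stays essentially untouched.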

>> What performance do you see with the mvneta card? How much are we
>> willing to pay for this feature when it is not being used, or can we
>> in some way selectively turn it on only when needed?
> 
> IIRC I did not get any noticeable throughput degradation on mvneta, but I will
> re-run the tests on an updated bpf-next tree.

But compared to i40e, mvneta is also only a 1-2.5 Gbps device, so any overhead is
potentially less visible, right [0]? Either way, it's definitely good to get more
benchmarking data points, given these were lacking before for higher-speed NICs
in particular.
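
On the "pay only when needed" question above, one possible shape -- purely a
sketch with made-up names and constants, not taken from any driver -- would be to
decide at ring configuration time whether multi-buffer frames can occur at all
(e.g. whether the MTU still fits in a single frame) and cache that in a per-ring
flag, so the hot path can skip the mb bookkeeping outright:

#include <stdbool.h>

#define FRAME_SZ 4096u          /* one page per RX buffer (assumption) */
#define HEADROOM  256u          /* XDP headroom (assumption) */
#define TAILROOM  320u          /* space reserved for shared_info (assumption) */

struct rx_ring_sketch {
        unsigned int mtu;
        bool mb_possible;       /* decided once, off the hot path */
};

static void rx_ring_configure(struct rx_ring_sketch *ring, unsigned int mtu)
{
        ring->mtu = mtu;
        /* Multi-buffer is only ever needed when a max-size packet
         * cannot fit into a single frame. */
        ring->mb_possible = HEADROOM + mtu + TAILROOM > FRAME_SZ;
}

static void rx_process(struct rx_ring_sketch *ring)
{
        if (!ring->mb_possible) {
                /* legacy single-buffer path, completely unchanged */
                return;
        }
        /* multi-buffer path: set mb, fill frags in shared_info, ... */
}

Whether such a knob should be per ring, per device, or tied to the loaded program
is exactly the kind of thing more benchmark data would help decide.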

Thanks everyone,
Daniel

   [0] https://doc.dpdk.org/guides/nics/mvneta.html
