Message-ID: <CAJ8uoz0BZGOuSP_vq8GLMxn=Y1W4RQJiK6ED1Cj9Y+GuLxvbbg@mail.gmail.com>
Date: Tue, 24 Apr 2018 11:14:54 +0200
From: Magnus Karlsson <magnus.karlsson@...il.com>
To: Jason Wang <jasowang@...hat.com>
Cc: Björn Töpel <bjorn.topel@...il.com>,
"Karlsson, Magnus" <magnus.karlsson@...el.com>,
Alexander Duyck <alexander.h.duyck@...el.com>,
Alexander Duyck <alexander.duyck@...il.com>,
John Fastabend <john.fastabend@...il.com>,
Alexei Starovoitov <ast@...com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Willem de Bruijn <willemdebruijn.kernel@...il.com>,
Daniel Borkmann <daniel@...earbox.net>,
"Michael S. Tsirkin" <mst@...hat.com>,
Network Development <netdev@...r.kernel.org>,
Björn Töpel <bjorn.topel@...el.com>,
michael.lundkvist@...csson.com,
"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
"Singhai, Anjali" <anjali.singhai@...el.com>,
"Zhang, Qi Z" <qi.z.zhang@...el.com>
Subject: Re: [PATCH bpf-next 00/15] Introducing AF_XDP support
On Tue, Apr 24, 2018 at 11:10 AM, Jason Wang <jasowang@...hat.com> wrote:
>
>
> On April 24, 2018 at 16:44, Magnus Karlsson wrote:
>>>>
>>>> We have run some benchmarks on a dual-socket system with two Broadwell
>>>> E5 2660 @ 2.0 GHz CPUs with hyperthreading turned off. Each socket has
>>>> 14 cores, which gives a total of 28, but only two cores are used in
>>>> these experiments: one for TX/RX and one for the user space application.
>>>> The memory is DDR4 @ 2133 MT/s (1067 MHz); with eight 8192 MB DIMMs the
>>>> system has 64 GB of memory in total. The compiler used is gcc version
>>>> 5.4.0 20160609. The NIC is an Intel I40E 40 Gbit/s adapter using the
>>>> i40e driver.
>>>>
>>>> Below are the results in Mpps of the I40E benchmark runs for 64 and
>>>> 1500 byte packets, generated by commercial packet generator HW sending
>>>> packets at full 40 Gbit/s line rate.
>>>>
>>>> AF_XDP performance 64 byte packets. Results from RFC V2 in parentheses.
>>>> Benchmark   XDP_SKB    XDP_DRV
>>>> rxdrop      2.9(3.0)   9.4(9.3)
>>>> txpush      2.5(2.2)   NA*
>>>> l2fwd       1.9(1.7)   2.4(2.4)   (TX using XDP_SKB in both cases)
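
For anyone who wants to reproduce these numbers: they were presumably produced with the xdpsock sample application shipped with the patch set. Below is a minimal sketch of the socket setup such a benchmark app performs. It is written against the if_xdp.h UAPI as it was eventually merged, so struct fields, flag names and the fallback constants may differ slightly from the exact revision under review; ring mmap()ing and all error handling are left out.

/*
 * Minimal AF_XDP socket setup sketch, roughly what a benchmark app such
 * as the xdpsock sample does. Written against the if_xdp.h UAPI as it
 * was eventually merged, so details may differ from this exact patch
 * revision. Error handling and ring mmap()ing are omitted for brevity.
 */
#include <linux/if_xdp.h>
#include <net/if.h>
#include <sys/socket.h>
#include <stdlib.h>
#include <unistd.h>

#ifndef AF_XDP
#define AF_XDP 44        /* may be missing from older libc headers */
#endif
#ifndef SOL_XDP
#define SOL_XDP 283
#endif

#define NUM_FRAMES 4096
#define FRAME_SIZE 2048

static int xsk_create(const char *ifname, unsigned int queue_id)
{
	struct sockaddr_xdp sxdp = {};
	struct xdp_umem_reg umem = {};
	int ring_entries = 1024;
	void *bufs;
	int fd;

	fd = socket(AF_XDP, SOCK_RAW, 0);

	/* Register a user-space packet buffer area (the UMEM). */
	posix_memalign(&bufs, getpagesize(), NUM_FRAMES * FRAME_SIZE);
	umem.addr = (unsigned long)bufs;
	umem.len = NUM_FRAMES * FRAME_SIZE;
	umem.chunk_size = FRAME_SIZE;
	umem.headroom = 0;
	setsockopt(fd, SOL_XDP, XDP_UMEM_REG, &umem, sizeof(umem));

	/* Size the four rings: RX, TX, fill and completion. The rings are
	 * then mmap()ed using the offsets reported by
	 * getsockopt(XDP_MMAP_OFFSETS), which is omitted here. */
	setsockopt(fd, SOL_XDP, XDP_RX_RING, &ring_entries, sizeof(ring_entries));
	setsockopt(fd, SOL_XDP, XDP_TX_RING, &ring_entries, sizeof(ring_entries));
	setsockopt(fd, SOL_XDP, XDP_UMEM_FILL_RING, &ring_entries, sizeof(ring_entries));
	setsockopt(fd, SOL_XDP, XDP_UMEM_COMPLETION_RING, &ring_entries, sizeof(ring_entries));

	/* Bind the socket to one RX/TX queue pair of the interface. */
	sxdp.sxdp_family = AF_XDP;
	sxdp.sxdp_ifindex = if_nametoindex(ifname);
	sxdp.sxdp_queue_id = queue_id;
	bind(fd, (struct sockaddr *)&sxdp, sizeof(sxdp));

	return fd;
}

After this, the application populates the fill ring with UMEM frame addresses so the kernel has buffers to place received packets in.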
>>>
>>> This number does not look very exciting. I can get ~3 Mpps when using
>>> testpmd in a guest with xdp_redirect.sh on the host between ixgbe and
>>> TAP/vhost. I believe we can get even better performance without virt.
>>> It would be interesting to compare this performance with e.g. testpmd +
>>> virtio_user (vhost_kernel) + XDP.
>>
>> Note that all the XDP_SKB numbers, plus the TX part of XDP_DRV for l2fwd,
>> use SKBs and the generic XDP path in the kernel. I am not surprised that
>> those numbers are lower than what you are seeing with XDP_DRV support
>> (if that is what you are running? I am unsure about your setup).
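
As a side note for anyone following along: XDP_SKB and XDP_DRV in the table correspond to the generic and native XDP attach modes. Below is a small sketch of how a loader forces one or the other, using the XDP_FLAGS_* bits from linux/if_link.h and the bpf_set_link_xdp_fd() helper that libbpf provided around this time; the header it is declared in has moved between libbpf versions, so treat the include as an assumption.

/*
 * Sketch: force generic (SKB) vs native (driver) XDP mode when attaching
 * the program that redirects into the AF_XDP socket. XDP_FLAGS_* come
 * from linux/if_link.h; bpf_set_link_xdp_fd() is the libbpf helper of
 * this era (later replaced by bpf_xdp_attach()), and the header that
 * declares it has varied between libbpf versions.
 */
#include <linux/if_link.h>
#include <bpf/libbpf.h>

static int attach_xdp(int ifindex, int prog_fd, int use_skb_mode)
{
	__u32 flags = use_skb_mode ? XDP_FLAGS_SKB_MODE  /* generic XDP, what XDP_SKB measures */
				   : XDP_FLAGS_DRV_MODE; /* native XDP in the driver, XDP_DRV */

	return bpf_set_link_xdp_fd(ifindex, prog_fd, flags);
}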
>
>
> Yes, I'm using a Haswell E5-2630 v3 @ 2.40 GHz and ixgbe.
>
>> The 9.4 Mpps for RX is what you get with the XDP_DRV support and copies
>> out to user space. Or is this the number you think is low?
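
To make concrete what "copies out to user space" means for rxdrop: the kernel copies each received frame into the UMEM and posts a descriptor on the RX ring, and the application simply recycles the frame back via the fill ring. A conceptual sketch follows; the two ring helpers are hypothetical stand-ins for the accessors a real application builds on top of the mmap()ed producer/consumer rings (which also need the proper memory barriers), and the descriptor layout follows the eventually merged if_xdp.h.

/*
 * Conceptual rxdrop inner loop in copy mode. rx_ring_dequeue() and
 * fill_ring_enqueue() are hypothetical helpers standing in for the real
 * ring accessors; descriptor layout follows the merged if_xdp.h UAPI.
 */
#include <linux/if_xdp.h>

#define BATCH_SIZE 16

/* Hypothetical helpers: dequeue received descriptors from the RX ring,
 * and post a frame address back on the fill ring. */
extern unsigned int rx_ring_dequeue(struct xdp_desc *descs, unsigned int max);
extern void fill_ring_enqueue(unsigned long long addr);

static void rxdrop_loop(void)
{
	struct xdp_desc descs[BATCH_SIZE];
	unsigned int rcvd, i;

	for (;;) {
		/* In copy mode the kernel has already copied each received
		 * frame into the UMEM area the descriptor points at. */
		rcvd = rx_ring_dequeue(descs, BATCH_SIZE);

		/* rxdrop: do nothing with the data, just hand the frames
		 * straight back to the kernel via the fill ring. */
		for (i = 0; i < rcvd; i++)
			fill_ring_enqueue(descs[i].addr);
	}
}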
>
>
> No, rxdrop looks OK. I meant l2fwd only.
OK, sounds good. l2fwd will get much better once we add XDP_DRV support for TX.
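
For completeness, the TX side that txpush and l2fwd exercise works by posting descriptors on the TX ring and then kicking the kernel with an empty sendto(); completed frame addresses come back on the completion ring for reuse. A minimal sketch of the kick, assuming an already set-up socket fd:

/*
 * Minimal TX kick sketch. Posting descriptors on the TX ring is omitted;
 * the empty sendto() tells the kernel to start transmitting what has been
 * posted, and completed frame addresses later appear on the completion
 * ring.
 */
#include <sys/socket.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

static void kick_tx(int xsk_fd)
{
	if (sendto(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, 0) < 0 &&
	    errno != EAGAIN && errno != EBUSY)
		fprintf(stderr, "TX kick failed: %s\n", strerror(errno));
}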
Thanks: Magnus
>> Zerocopy will be added
>> in later patch sets.
>>
>> With that said, both XDP_SKB and XDP_DRV can be optimized. We
>> have not spent that much time on optimizations at this point.
>>
>
> Yes, and it is interesting to compare the performance numbers between AF_XDP
> and TAP XDP + vhost_net since their functionality is almost equivalent.
>
> Thanks