Message-ID: <CACGkMEsqKQD_mBRB5FQwoOTR-gq1Br1oEdtEoxBLhbCSt4SRgA@mail.gmail.com>
Date: Tue, 1 Mar 2022 09:47:28 +0800
From: Jason Wang <jasowang@...hat.com>
To: Stephen Hemminger <stephen@...workplumber.org>
Cc: Harold Huang <baymaxhuang@...il.com>,
netdev <netdev@...r.kernel.org>, Paolo Abeni <pabeni@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
open list <linux-kernel@...r.kernel.org>,
"open list:XDP (eXpress Data Path)" <bpf@...r.kernel.org>
Subject: Re: [PATCH net-next v3] tun: support NAPI for packets received from
batched XDP buffs
On Tue, Mar 1, 2022 at 1:15 AM Stephen Hemminger
<stephen@...workplumber.org> wrote:
>
> On Mon, 28 Feb 2022 15:46:56 +0800
> Jason Wang <jasowang@...hat.com> wrote:
>
> > On Mon, Feb 28, 2022 at 11:38 AM Harold Huang <baymaxhuang@...il.com> wrote:
> > >
> > > In tun, NAPI is supported, and we can also use NAPI in the path of
> > > batched XDP buffs to accelerate packet processing. What's more, once
> > > NAPI is used, GRO is also supported. iperf shows that single-stream
> > > throughput improves from 4.5 Gbps to 9.2 Gbps. Additionally, 9.2 Gbps
> > > nearly reaches the line speed of the physical NIC, and about 15% of a
> > > CPU core remains idle on the vhost thread.
> > >
> > > Test topology:
> > > [iperf server]<--->tap<--->dpdk testpmd<--->phy nic<--->[iperf client]
> > >
> > > Iperf stream:
> > > iperf3 -c 10.0.0.2 -i 1 -t 10
> > >
> > > Before:
> > > ...
> > > [ 5] 5.00-6.00 sec 558 MBytes 4.68 Gbits/sec 0 1.50 MBytes
> > > [ 5] 6.00-7.00 sec 556 MBytes 4.67 Gbits/sec 1 1.35 MBytes
> > > [ 5] 7.00-8.00 sec 556 MBytes 4.67 Gbits/sec 2 1.18 MBytes
> > > [ 5] 8.00-9.00 sec 559 MBytes 4.69 Gbits/sec 0 1.48 MBytes
> > > [ 5] 9.00-10.00 sec 556 MBytes 4.67 Gbits/sec 1 1.33 MBytes
> > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > [ ID] Interval Transfer Bitrate Retr
> > > [ 5] 0.00-10.00 sec 5.39 GBytes 4.63 Gbits/sec 72 sender
> > > [ 5] 0.00-10.04 sec 5.39 GBytes 4.61 Gbits/sec receiver
> > >
> > > After:
> > > ...
> > > [ 5] 5.00-6.00 sec 1.07 GBytes 9.19 Gbits/sec 0 1.55 MBytes
> > > [ 5] 6.00-7.00 sec 1.08 GBytes 9.30 Gbits/sec 0 1.63 MBytes
> > > [ 5] 7.00-8.00 sec 1.08 GBytes 9.25 Gbits/sec 0 1.72 MBytes
> > > [ 5] 8.00-9.00 sec 1.08 GBytes 9.25 Gbits/sec 77 1.31 MBytes
> > > [ 5] 9.00-10.00 sec 1.08 GBytes 9.24 Gbits/sec 0 1.48 MBytes
> > > - - - - - - - - - - - - - - - - - - - - - - - - -
> > > [ ID] Interval Transfer Bitrate Retr
> > > [ 5] 0.00-10.00 sec 10.8 GBytes 9.28 Gbits/sec 166 sender
> > > [ 5] 0.00-10.04 sec 10.8 GBytes 9.24 Gbits/sec receiver
> > >
> > > Reported-at: https://lore.kernel.org/all/CACGkMEvTLG0Ayg+TtbN4q4pPW-ycgCCs3sC3-TF8cuRTf7Pp1A@mail.gmail.com
> > > Signed-off-by: Harold Huang <baymaxhuang@...il.com>
> >
> > Acked-by: Jason Wang <jasowang@...hat.com>
>
> Would this help when using sendmmsg and recvmmsg on the TAP device?
We haven't exported the socket object of tuntap to userspace, so we
can't use sendmmsg()/recvmmsg() for now.
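
For reference, a minimal sketch of why that fails today (the device
name "tap0" is hypothetical): a tun/tap fd is a character device, not
a socket, so the mmsg syscalls reject it with ENOTSOCK:

/* Sketch: recvmmsg() requires a socket fd; a tap fd is not one. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int main(void)
{
        struct ifreq ifr;
        struct mmsghdr msgs[8];
        int fd = open("/dev/net/tun", O_RDWR);

        memset(&ifr, 0, sizeof(ifr));
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
        strncpy(ifr.ifr_name, "tap0", IFNAMSIZ - 1);
        if (fd < 0 || ioctl(fd, TUNSETIFF, &ifr) < 0) {
                perror("tap setup");
                return 1;
        }

        memset(msgs, 0, sizeof(msgs));
        if (recvmmsg(fd, msgs, 8, MSG_DONTWAIT, NULL) < 0)
                perror("recvmmsg on tap fd");  /* expected: ENOTSOCK */

        close(fd);
        return 0;
}

Batched I/O on the tap fd therefore still means one packet per
read()/write() (or readv()/writev()) call.
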
> Asking because I'm interested in speeding up another use of the TAP
> device, and wondering if this would help.
>
Yes, it would be interesting. We need someone to work on that.
Thanks
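
For completeness: the NAPI path in tun is opted into from userspace
via IFF_NAPI at TUNSETIFF time, and per the commit message the skbs
built from batched XDP buffs then flow through it (and thus GRO). A
minimal sketch of enabling it, assuming a kernel with IFF_NAPI support
(v4.14+; the device name "tap0" is again hypothetical):

/* Sketch: create a tap device with NAPI mode enabled so received
 * packets are fed to the stack through the NAPI (and therefore GRO)
 * path. Needs CAP_NET_ADMIN. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if.h>
#include <linux/if_tun.h>

int main(void)
{
        struct ifreq ifr;
        int fd = open("/dev/net/tun", O_RDWR);

        if (fd < 0) {
                perror("open /dev/net/tun");
                return 1;
        }
        memset(&ifr, 0, sizeof(ifr));
        /* IFF_NAPI: have tun feed received packets through NAPI/GRO. */
        ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_NAPI;
        strncpy(ifr.ifr_name, "tap0", IFNAMSIZ - 1);
        if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
                perror("TUNSETIFF");
                return 1;
        }
        printf("%s created with IFF_NAPI\n", ifr.ifr_name);
        /* ... hand the fd to vhost-net or start forwarding here ... */
        close(fd);
        return 0;
}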