Message-ID: <PAXPR04MB9185E69829540686618987D289569@PAXPR04MB9185.eurprd04.prod.outlook.com>
Date:   Fri, 30 Sep 2022 19:58:18 +0000
From:   Shenwei Wang <shenwei.wang@....com>
To:     Andrew Lunn <andrew@...n.ch>
CC:     "David S . Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        John Fastabend <john.fastabend@...il.com>,
        Wei Fang <wei.fang@....com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "imx@...ts.linux.dev" <imx@...ts.linux.dev>
Subject: RE: [EXT] Re: [PATCH 1/1] net: fec: using page pool to manage RX
 buffers



> -----Original Message-----
> From: Andrew Lunn <andrew@...n.ch>
> Sent: Friday, September 30, 2022 2:52 PM
> To: Shenwei Wang <shenwei.wang@....com>
> Cc: David S . Miller <davem@...emloft.net>; Eric Dumazet
> <edumazet@...gle.com>; Jakub Kicinski <kuba@...nel.org>; Paolo Abeni
> <pabeni@...hat.com>; Alexei Starovoitov <ast@...nel.org>; Daniel Borkmann
> <daniel@...earbox.net>; Jesper Dangaard Brouer <hawk@...nel.org>; John
> Fastabend <john.fastabend@...il.com>; Wei Fang <wei.fang@....com>;
> netdev@...r.kernel.org; linux-kernel@...r.kernel.org; imx@...ts.linux.dev
> Subject: [EXT] Re: [PATCH 1/1] net: fec: using page pool to manage RX buffers
> 
> Caution: EXT Email
> 
> On Fri, Sep 30, 2022 at 02:37:51PM -0500, Shenwei Wang wrote:
> > This patch optimizes RX buffer management by using the page pool.
> > The change prepares for the upcoming XDP support. The current
> > driver uses one frame per page for easy management.
> >
> > The following are the comparison results between the page pool
> > implementation and the original (non page pool) implementation.
> >
> >  --- Page Pool implementation ----
> >
> > shenwei@...0:~$ iperf -c 10.81.16.245 -w 2m -i 1
> > ------------------------------------------------------------
> > Client connecting to 10.81.16.245, TCP port 5001
> > TCP window size:  416 KByte (WARNING: requested 1.91 MByte)
> > ------------------------------------------------------------
> > [  1] local 10.81.17.20 port 43204 connected with 10.81.16.245 port 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [  1] 0.0000-1.0000 sec   111 MBytes   933 Mbits/sec
> > [  1] 1.0000-2.0000 sec   111 MBytes   934 Mbits/sec
> > [  1] 2.0000-3.0000 sec   112 MBytes   935 Mbits/sec
> > [  1] 3.0000-4.0000 sec   111 MBytes   933 Mbits/sec
> > [  1] 4.0000-5.0000 sec   111 MBytes   934 Mbits/sec
> > [  1] 5.0000-6.0000 sec   111 MBytes   933 Mbits/sec
> > [  1] 6.0000-7.0000 sec   111 MBytes   931 Mbits/sec
> > [  1] 7.0000-8.0000 sec   112 MBytes   935 Mbits/sec
> > [  1] 8.0000-9.0000 sec   111 MBytes   933 Mbits/sec
> > [  1] 9.0000-10.0000 sec   112 MBytes   935 Mbits/sec
> > [  1] 0.0000-10.0077 sec  1.09 GBytes   933 Mbits/sec
> >
> >  --- Non Page Pool implementation ----
> >
> > shenwei@...0:~$ iperf -c 10.81.16.245 -w 2m -i 1
> > ------------------------------------------------------------
> > Client connecting to 10.81.16.245, TCP port 5001
> > TCP window size:  416 KByte (WARNING: requested 1.91 MByte)
> > ------------------------------------------------------------
> > [  1] local 10.81.17.20 port 49154 connected with 10.81.16.245 port 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [  1] 0.0000-1.0000 sec   104 MBytes   868 Mbits/sec
> > [  1] 1.0000-2.0000 sec   105 MBytes   878 Mbits/sec
> > [  1] 2.0000-3.0000 sec   105 MBytes   881 Mbits/sec
> > [  1] 3.0000-4.0000 sec   105 MBytes   879 Mbits/sec
> > [  1] 4.0000-5.0000 sec   105 MBytes   878 Mbits/sec
> > [  1] 5.0000-6.0000 sec   105 MBytes   878 Mbits/sec
> > [  1] 6.0000-7.0000 sec   104 MBytes   875 Mbits/sec
> > [  1] 7.0000-8.0000 sec   104 MBytes   875 Mbits/sec
> > [  1] 8.0000-9.0000 sec   104 MBytes   873 Mbits/sec
> > [  1] 9.0000-10.0000 sec   104 MBytes   875 Mbits/sec
> > [  1] 0.0000-10.0073 sec  1.02 GBytes   875 Mbits/sec
> 
> What SoC? As I keep saying, the FEC is used in a lot of different SoCs, and you
> need to show this does not cause any regressions in the older SoCs. There are
> probably a lot more imx5 and imx6 devices out in the wild than imx8, which is
> what I guess you are testing on. Mainline needs to work well on them all, even if
> NXP no longer cares about the older SoCs.
> 

The testing above was on the imx8 platform. The following are the test results
on the imx6sx board:

 ######  Original implementation  ######

shenwei@...0:~/pktgen$ iperf -c 10.81.16.245 -w 2m -i 1
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size:  416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[  1] local 10.81.17.20 port 36486 connected with 10.81.16.245 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-1.0000 sec  70.5 MBytes   591 Mbits/sec
[  1] 1.0000-2.0000 sec  64.5 MBytes   541 Mbits/sec
[  1] 2.0000-3.0000 sec  73.6 MBytes   618 Mbits/sec
[  1] 3.0000-4.0000 sec  73.6 MBytes   618 Mbits/sec
[  1] 4.0000-5.0000 sec  72.9 MBytes   611 Mbits/sec
[  1] 5.0000-6.0000 sec  73.4 MBytes   616 Mbits/sec
[  1] 6.0000-7.0000 sec  73.5 MBytes   617 Mbits/sec
[  1] 7.0000-8.0000 sec  73.4 MBytes   616 Mbits/sec
[  1] 8.0000-9.0000 sec  73.4 MBytes   616 Mbits/sec
[  1] 9.0000-10.0000 sec  73.9 MBytes   620 Mbits/sec
[  1] 0.0000-10.0174 sec   723 MBytes   605 Mbits/sec


 ######  Page Pool implementation  ######

shenwei@...0:~/pktgen$ iperf -c 10.81.16.245 -w 2m -i 1
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size:  416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[  1] local 10.81.17.20 port 57288 connected with 10.81.16.245 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-1.0000 sec  78.8 MBytes   661 Mbits/sec
[  1] 1.0000-2.0000 sec  82.5 MBytes   692 Mbits/sec
[  1] 2.0000-3.0000 sec  82.4 MBytes   691 Mbits/sec
[  1] 3.0000-4.0000 sec  82.4 MBytes   691 Mbits/sec
[  1] 4.0000-5.0000 sec  82.5 MBytes   692 Mbits/sec
[  1] 5.0000-6.0000 sec  82.4 MBytes   691 Mbits/sec
[  1] 6.0000-7.0000 sec  82.5 MBytes   692 Mbits/sec
[  1] 7.0000-8.0000 sec  82.4 MBytes   691 Mbits/sec
[  1] 8.0000-9.0000 sec  82.4 MBytes   691 Mbits/sec
^C[  1] 9.0000-9.5506 sec  45.0 MBytes   686 Mbits/sec
[  1] 0.0000-9.5506 sec   783 MBytes   688 Mbits/sec
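
For reference, the page pool path improves iperf throughput by roughly 6.6%
on imx8 (875 -> 933 Mbits/sec) and roughly 13.7% on imx6sx (605 -> 688
Mbits/sec). The core of the change is along the following lines; this is a
minimal sketch against the in-tree page_pool API, not the patch itself, and
struct fec_rxq, RX_RING_SIZE, and the function names are illustrative:

#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <net/page_pool.h>

#define RX_RING_SIZE 512		/* illustrative ring depth */

struct fec_rxq {			/* illustrative, not the driver's struct */
	struct page_pool *page_pool;
};

/* Create a per-queue pool. PP_FLAG_DMA_MAP makes the pool DMA-map each
 * page once at allocation time, so the hot path never calls dma_map_page();
 * PP_FLAG_DMA_SYNC_DEV lets the pool sync recycled pages for the device. */
static int fec_rxq_create_page_pool(struct device *dev, struct fec_rxq *rxq)
{
	struct page_pool_params pp = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,		/* one frame per page */
		.pool_size	= RX_RING_SIZE,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE,
	};

	rxq->page_pool = page_pool_create(&pp);
	if (IS_ERR(rxq->page_pool)) {
		int err = PTR_ERR(rxq->page_pool);

		rxq->page_pool = NULL;
		return err;
	}
	return 0;
}

/* Refill one RX slot: take a recycled (or fresh) page from the pool and
 * report its pre-mapped DMA address for the buffer descriptor. */
static struct page *fec_rxq_refill(struct fec_rxq *rxq, dma_addr_t *dma)
{
	struct page *page = page_pool_dev_alloc_pages(rxq->page_pool);

	if (page)
		*dma = page_pool_get_dma_addr(page);
	return page;			/* NULL on allocation failure */
}

On completion, a page is either attached to an skb or handed back with
page_pool_put_full_page(), so buffers are recycled rather than freed and
re-allocated per frame, which is where the gains above come from.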


Thanks,
Shenwei

>     Andrew
