Message-ID: <YzdI4mDXCKuI/58N@lunn.ch>
Date: Fri, 30 Sep 2022 21:52:02 +0200
From: Andrew Lunn <andrew@...n.ch>
To: Shenwei Wang <shenwei.wang@....com>
Cc: "David S . Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
Wei Fang <wei.fang@....com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, imx@...ts.linux.dev
Subject: Re: [PATCH 1/1] net: fec: using page pool to manage RX buffers
On Fri, Sep 30, 2022 at 02:37:51PM -0500, Shenwei Wang wrote:
> This patch optimizes RX buffer management by using the page
> pool. The purpose of this change is to prepare for the upcoming
> XDP support. The driver uses one frame per page for easy
> management.
>
> The following is a comparison between the page pool implementation
> and the original implementation (non page pool).
>
> --- Page Pool implementation ----
>
> shenwei@...0:~$ iperf -c 10.81.16.245 -w 2m -i 1
> ------------------------------------------------------------
> Client connecting to 10.81.16.245, TCP port 5001
> TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
> ------------------------------------------------------------
> [ 1] local 10.81.17.20 port 43204 connected with 10.81.16.245 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 1] 0.0000-1.0000 sec 111 MBytes 933 Mbits/sec
> [ 1] 1.0000-2.0000 sec 111 MBytes 934 Mbits/sec
> [ 1] 2.0000-3.0000 sec 112 MBytes 935 Mbits/sec
> [ 1] 3.0000-4.0000 sec 111 MBytes 933 Mbits/sec
> [ 1] 4.0000-5.0000 sec 111 MBytes 934 Mbits/sec
> [ 1] 5.0000-6.0000 sec 111 MBytes 933 Mbits/sec
> [ 1] 6.0000-7.0000 sec 111 MBytes 931 Mbits/sec
> [ 1] 7.0000-8.0000 sec 112 MBytes 935 Mbits/sec
> [ 1] 8.0000-9.0000 sec 111 MBytes 933 Mbits/sec
> [ 1] 9.0000-10.0000 sec 112 MBytes 935 Mbits/sec
> [ 1] 0.0000-10.0077 sec 1.09 GBytes 933 Mbits/sec
>
> --- Non Page Pool implementation ----
>
> shenwei@...0:~$ iperf -c 10.81.16.245 -w 2m -i 1
> ------------------------------------------------------------
> Client connecting to 10.81.16.245, TCP port 5001
> TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
> ------------------------------------------------------------
> [ 1] local 10.81.17.20 port 49154 connected with 10.81.16.245 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 1] 0.0000-1.0000 sec 104 MBytes 868 Mbits/sec
> [ 1] 1.0000-2.0000 sec 105 MBytes 878 Mbits/sec
> [ 1] 2.0000-3.0000 sec 105 MBytes 881 Mbits/sec
> [ 1] 3.0000-4.0000 sec 105 MBytes 879 Mbits/sec
> [ 1] 4.0000-5.0000 sec 105 MBytes 878 Mbits/sec
> [ 1] 5.0000-6.0000 sec 105 MBytes 878 Mbits/sec
> [ 1] 6.0000-7.0000 sec 104 MBytes 875 Mbits/sec
> [ 1] 7.0000-8.0000 sec 104 MBytes 875 Mbits/sec
> [ 1] 8.0000-9.0000 sec 104 MBytes 873 Mbits/sec
> [ 1] 9.0000-10.0000 sec 104 MBytes 875 Mbits/sec
> [ 1] 0.0000-10.0073 sec 1.02 GBytes 875 Mbits/sec
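
For context, the general shape of such a conversion (this is a minimal
sketch of the common page pool pattern, not the code from this patch) is
to create one page pool per RX queue and let it handle DMA mapping and
recycling. The fec_rxq structure, fec_rxq_* helpers and PAGE_POOL_SIZE
below are hypothetical names used only for illustration; the API calls
are the standard ones from include/net/page_pool.h of this kernel era.

/*
 * Sketch: per-queue page pool setup for RX buffers.
 * Names (fec_rxq, PAGE_POOL_SIZE, helpers) are hypothetical; only the
 * page pool API calls themselves are real.
 */
#include <linux/dma-mapping.h>
#include <net/page_pool.h>

#define PAGE_POOL_SIZE	256	/* hypothetical ring size */

struct fec_rxq {
	struct page_pool *page_pool;
};

static int fec_rxq_create_page_pool(struct fec_rxq *rxq, struct device *dev)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,			/* one frame per page */
		.pool_size	= PAGE_POOL_SIZE,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE,
		.offset		= 0,
	};

	rxq->page_pool = page_pool_create(&pp_params);
	if (IS_ERR(rxq->page_pool)) {
		int err = PTR_ERR(rxq->page_pool);

		rxq->page_pool = NULL;
		return err;
	}
	return 0;
}

/* Refill one RX descriptor: the pool hands back a DMA-mapped page. */
static dma_addr_t fec_rxq_alloc_buffer(struct fec_rxq *rxq, struct page **page)
{
	*page = page_pool_dev_alloc_pages(rxq->page_pool);
	if (!*page)
		return DMA_MAPPING_ERROR;

	return page_pool_get_dma_addr(*page);
}

On the completion path, pages built into skbs are typically handed back
to the pool via skb_mark_for_recycle(), or returned directly with
page_pool_put_full_page(), rather than being freed.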
What SoC? As I keep saying, the FEC is used in a lot of different
SoCs, and you need to show this does not cause any regressions on the
older SoCs. There are probably a lot more imx5 and imx6 devices out in
the wild than imx8, which is what I guess you are testing on. Mainline
needs to work well on them all, even if NXP no longer cares about the
older SoCs.
Andrew