Message-Id: <C3WWHZGE1V5L.54SWH1P5EARO@wkz-x280>
Date: Fri, 03 Jul 2020 12:02:04 +0200
From: "Tobias Waldekranz" <tobias@...dekranz.com>
To: "Andy Duan" <fugang.duan@....com>,
"davem@...emloft.net" <davem@...emloft.net>
Cc: "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: [EXT] [PATCH v2 net] net: ethernet: fec: prevent tx starvation
under high rx load
On Fri Jul 3, 2020 at 11:58 AM CEST, Andy Duan wrote:
> From: Tobias Waldekranz <tobias@...dekranz.com> Sent: Friday, July 3, 2020 3:55 PM
> > On Fri Jul 3, 2020 at 4:45 AM CEST, Andy Duan wrote:
> > > In fact, you already changed the queue priority compared to before.
> > > Before: queue1 (Audio) > queue2 (video) > queue0 (best effort)
> > > Now: queue2 (video) > queue1 (Audio) > queue0 (best effort)
> >
> > Yes, thank you, I meant to ask about that. I was looking at these definitions in
> > fec.h:
> >
> > #define RCMR_CMP_1	(RCMR_CMP_CFG(0, 0) | RCMR_CMP_CFG(1, 1) | \
> > 			 RCMR_CMP_CFG(2, 2) | RCMR_CMP_CFG(3, 3))
> > #define RCMR_CMP_2	(RCMR_CMP_CFG(4, 0) | RCMR_CMP_CFG(5, 1) | \
> > 			 RCMR_CMP_CFG(6, 2) | RCMR_CMP_CFG(7, 3))
> >
> > I read that as PCP 0-3 being mapped to queue 1 and 4-7 to queue 2. That led
> > me to believe that the order should be 2, 1, 0. Is the driver supposed to
> > prioritize PCP 0-3 over 4-7, or have I misunderstood completely?
>
> The configuration is for RX queues.
The order in which we clean the Tx queues should not matter, as there
is no budget limit in that case. I.e. even in the worst case, where all
three queues are filled with transmitted buffers, we will always
collect all of them in a single NAPI poll, no? I just put them in the
same order to be consistent.
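
To make sure we mean the same thing, here is roughly the shape I have
in mind (a simplified sketch of the pattern only, not the actual driver
code; the fec_enet_tx_queue()/fec_enet_rx_queue() helpers and their
signatures are assumed from my reading of fec_main.c, and interrupt
re-enabling etc. is omitted):

	static int fec_poll_sketch(struct napi_struct *napi, int budget)
	{
		struct net_device *ndev = napi->dev;
		int done = 0;
		int q;

		/* Tx completions are not charged against the budget, so
		 * all three queues are fully reaped in one poll and the
		 * iteration order is irrelevant.
		 */
		for (q = 2; q >= 0; q--)
			fec_enet_tx_queue(ndev, q);

		/* Rx is charged against the budget, so the higher
		 * priority queue (2) is drained before 1 and then 0.
		 */
		for (q = 2; q >= 0 && done < budget; q--)
			done += fec_enet_rx_queue(ndev, budget - done, q);

		if (done < budget)
			napi_complete_done(napi, done);

		return done;
	}
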
> If we consider PCP 0 as high priority, that does make sense: 2 > 1 > 0.
Sorry, now I'm confused. The PCP->Queue mapping is:
0->1, 1->1, 2->1, 3->1
4->2, 5->2, 6->2, 7->2
A higher PCP value means higher priority, at least in 802.1Q/p. So the
order should be 2 > 1 > 0 _not_ because PCP 0 is the highest prio, but
because PCP 7 is, right?
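
Spelled out as a purely illustrative helper (the untagged -> queue 0
part is my reading of the setup, not something taken from fec.h):

	/* Illustration only: how I read the RCMR_CMP_1/RCMR_CMP_2 mapping.
	 * pcp < 0 stands in for untagged traffic.
	 */
	static inline int fec_pcp_to_rxq(int pcp)
	{
		if (pcp >= 4)
			return 2;	/* PCP 4-7: highest 802.1p priorities */
		if (pcp >= 0)
			return 1;	/* PCP 0-3 */
		return 0;		/* untagged: best effort */
	}
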
> >
> > > Other logic seems fine, but you should run stress tests to avoid any
> > > blocking issue, since the driver covers more than 20 imx platforms.
> >
> > I have run stress tests and I observe that we're dequeuing about as many
> > packets from each queue when the incoming line is filled with 1/3 each of
> > untagged/tagged-pcp0/tagged-pcp7 traffic:
> >
> > root@...oy:~# ply -c "sleep 2" '
> > t:net/napi_gro_receive_entry {
> > @[data->napi_id, data->queue_mapping] = count(); }'
> > ply: active
> > ply: deactivating
> >
> > @:
> > { 66, 3 }: 165811
> > { 66, 2 }: 167733
> > { 66, 1 }: 169470
> >
> > It seems like this is due to "Receive flushing" not being enabled in the FEC. If I
> > manually enable it for queue 0, processing is restricted to queues 1 and 2 only:
> >
> > root@...oy:~# devmem 0x30be01f0 32 $((1 << 3))
> > root@...oy:~# ply -c "sleep 2" '
> > t:net/napi_gro_receive_entry {
> > @[data->napi_id, data->queue_mapping] = count(); }'
> > ply: active
> > ply: deactivating
> >
> > @:
> > { 66, 2 }: 275055
> > { 66, 3 }: 275870
> >
> > Enabling flushing on queue 1 focuses all processing on queue 2:
>
> Please don't enable flush, there is an IC issue.
> NXP's latest errata doc should include the issue, but the flush issue was
> fixed on the imx8dxl platform.
> >
> > root@...oy:~# devmem 0x30be01f0 32 $((3 << 3))
> > root@...oy:~# ply -c "sleep 2" '
> > t:net/napi_gro_receive_entry {
> > @[data->napi_id, data->queue_mapping] = count(); }'
> > ply: active
> > ply: deactivating
> >
> > @:
> > { 66, 3 }: 545442
> >
> > Changing the default QoS settings feels like a separate change, but I can
> > submit a v3 as a series if you want?
>
> I think this version is fine. No need to submit a separate change.
> >
> > I do not have access to a single-queue iMX device, would it be possible for you
> > to test this change on such a device?
>
> Yes, I will do stress tests on imx8 and on legacy single-queue platforms
> like imx6, to try to avoid any blocking issue.
> Thank you !
Excellent, thank you!