Message-ID: <20230609121043.ekfvbgjiko7644t7@skbuf>
Date: Fri, 9 Jun 2023 15:10:43 +0300
From: Vladimir Oltean <vladimir.oltean@....com>
To: Jamal Hadi Salim <jhs@...atatu.com>
Cc: netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>,
Vinicius Costa Gomes <vinicius.gomes@...el.com>,
linux-kernel@...r.kernel.org, intel-wired-lan@...ts.osuosl.org,
Muhammad Husaini Zulkifli <muhammad.husaini.zulkifli@...el.com>,
Peilin Ye <yepeilin.cs@...il.com>,
Pedro Tammela <pctammela@...atatu.com>
Subject: Re: [PATCH RESEND net-next 5/5] net/sched: taprio: dump class stats
for the actual q->qdiscs[]
On Thu, Jun 08, 2023 at 02:44:46PM -0400, Jamal Hadi Salim wrote:
> Other than the refcount issue, I think the approach looks reasonable
> to me. The stats before/after that you are showing below are
> interesting, though; are you showing a transient phase where packets
> are temporarily in the backlog? Typically the backlog is a transient
> phase which lasts a very short period. Maybe it works differently for
> taprio? I took a quick look at the code and do see you decrement the
> backlog in the dequeue(), so if it is not transient then some code
> path is not being hit.
It's a fair concern. The thing is that I put very aggressive time slots
in the schedule that I'm testing with, and my kernel has a lot of
debugging stuff which bogs it down (kasan, kmemleak, lockdep, DMA API
debug etc). Not to mention that the CPU isn't the fastest to begin with.
The way taprio works is that there's an hrtimer which fires at the
expiration time of the current schedule entry and sets up the gates for
the next one. Each schedule entry has a gate for each traffic class,
which determines which traffic classes are eligible for dequeue() and
which ones aren't.
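For reference, a software-only taprio configuration of the kind I'm
testing with looks roughly like the below (interface name, base-time and
interval values are made up for illustration; each "sched-entry S
<gatemask> <interval>" opens the gates of the traffic classes set in
<gatemask> for <interval> nanoseconds):

tc qdisc replace dev eth0 parent root handle 8001: taprio \
	num_tc 3 \
	map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
	queues 1@0 1@1 2@2 \
	base-time 0 \
	sched-entry S 01 300000 \
	sched-entry S 02 300000 \
	sched-entry S 04 400000 \
	clockid CLOCK_TAI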
The dequeue() procedure, which is also invoked through the
advance_schedule() hrtimer -> __netif_schedule() path, is time-sensitive
as well. By the time taprio_dequeue() runs, the taprio_entry_allows_tx()
function might return false if the system is so bogged down that it
wasn't able to make enough progress to dequeue() an skb in time. When
that happens, there is currently no mechanism to age out packets that
have sat in the TX queues for too long (and what would "too long" even
mean?).
enqueue(), on the other hand, is technically not time-sensitive: you can
enqueue whenever you want and the Qdisc will dequeue whenever it can.
In practice, though, to make this scheduling technique useful, the user
space enqueue should also be time-aware (which you can't capture with
ping).
If I increase all my sched-entry intervals by a factor of 100, the
backlog issue goes away and the system can make forward progress.
So yeah, sorry, I didn't pay too much attention to the data I was
presenting for illustrative purposes.
> Aside: I realize you are busy - but if you get time and provide some
> sample tc command lines for testing we could help create the tests for
> you, at least the first time. The advantage of putting these tests in
> tools/testing/selftests/tc-testing/ is that there are test tools out
> there that run these tests and so regressions are easier to catch
> sooner.
Yeah, ok. The script posted in a reply to the cover letter is still what
I'm working with. The things it intends to capture are:
- attaching a custom Qdisc to one of taprio's classes doesn't fail
- attaching taprio to one of taprio's classes fails
- sending packets through one queue increases the counters (any counters)
of just that queue
All the above, replicated once for the software scheduling case and once
for the offload case. Currently netdevsim doesn't attempt to emulate
taprio offload.
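As a rough sketch of what those checks look like as tc commands (the
device, handles and parameters are illustrative, not the exact ones from
the script):

# attaching a custom child Qdisc to one of taprio's classes should work
tc qdisc replace dev eth0 parent 8001:1 handle 10: pfifo limit 64

# attaching taprio itself to one of taprio's classes should be rejected
tc qdisc replace dev eth0 parent 8001:1 taprio \
	num_tc 3 map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
	queues 1@0 1@1 2@2 base-time 0 \
	sched-entry S 01 300000 clockid CLOCK_TAI

# after sending traffic pinned to one TX queue, only that queue's
# class counters should increase
tc -s class show dev eth0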
Is there a way to skip tests? I may look into tdc, but I honestly don't
have time for unrelated stuff such as figuring out why my kernel isn't
configured properly for the other tests to pass - and it seems that once
one test fails, the rest are skipped entirely, see below.
Also, by what rule are the test IDs generated?
root@...ian:~# cd selftests/tc-testing/
root@...ian:~/selftests/tc-testing# ./tdc.sh
considering category qdisc
-- ns/SubPlugin.__init__
Test 0582: Create QFQ with default setting
Test c9a3: Create QFQ with class weight setting
Test d364: Test QFQ with max class weight setting
Test 8452: Create QFQ with class maxpkt setting
Test 22df: Test QFQ class maxpkt setting lower bound
Test 92ee: Test QFQ class maxpkt setting upper bound
Test d920: Create QFQ with multiple class setting
Test 0548: Delete QFQ with handle
Test 5901: Show QFQ class
Test 0385: Create DRR with default setting
Test 2375: Delete DRR with handle
Test 3092: Show DRR class
Test 3460: Create CBQ with default setting
exit: 2
exit: 0
Error: Specified qdisc kind is unknown.
-----> teardown stage *** Could not execute: "$TC qdisc del dev $DUMMY handle 1: root"
-----> teardown stage *** Error message: "Error: Invalid handle.
"
returncode 2; expected [0]
-----> teardown stage *** Aborting test run.
<_io.BufferedReader name=3> *** stdout ***
<_io.BufferedReader name=5> *** stderr ***
"-----> teardown stage" did not complete successfully
Exception <class '__main__.PluginMgrTestFail'> ('teardown', 'Error: Specified qdisc kind is unknown.\n', '"-----> teardown stage" did not complete successfully') (caught in test_runner, running test 14 3460 Create CBQ with default setting stage teardown)
---------------
traceback
File "/root/selftests/tc-testing/./tdc.py", line 495, in test_runner
res = run_one_test(pm, args, index, tidx)
File "/root/selftests/tc-testing/./tdc.py", line 434, in run_one_test
prepare_env(args, pm, 'teardown', '-----> teardown stage', tidx['teardown'], procout)
File "/root/selftests/tc-testing/./tdc.py", line 245, in prepare_env
raise PluginMgrTestFail(
---------------
accumulated output for this test:
Error: Specified qdisc kind is unknown.
---------------
All test results:
1..336
ok 1 0582 - Create QFQ with default setting
ok 2 c9a3 - Create QFQ with class weight setting
ok 3 d364 - Test QFQ with max class weight setting
ok 4 8452 - Create QFQ with class maxpkt setting
ok 5 22df - Test QFQ class maxpkt setting lower bound
ok 6 92ee - Test QFQ class maxpkt setting upper bound
ok 7 d920 - Create QFQ with multiple class setting
ok 8 0548 - Delete QFQ with handle
ok 9 5901 - Show QFQ class
ok 10 0385 - Create DRR with default setting
ok 11 2375 - Delete DRR with handle
ok 12 3092 - Show DRR class
ok 13 3460 - Create CBQ with default setting # skipped - "-----> teardown stage" did not complete successfully
ok 14 0592 - Create CBQ with mpu # skipped - skipped - previous teardown failed 14 3460
ok 15 4684 - Create CBQ with valid cell num # skipped - skipped - previous teardown failed 14 3460
ok 16 4345 - Create CBQ with invalid cell num # skipped - skipped - previous teardown failed 14 3460
ok 17 4525 - Create CBQ with valid ewma # skipped - skipped - previous teardown failed 14 3460
ok 18 6784 - Create CBQ with invalid ewma # skipped - skipped - previous teardown failed 14 3460
ok 19 5468 - Delete CBQ with handle # skipped - skipped - previous teardown failed 14 3460
ok 20 492a - Show CBQ class # skipped - skipped - previous teardown failed 14 3460
ok 21 9903 - Add mqprio Qdisc to multi-queue device (8 queues) # skipped - skipped - previous teardown failed 14 3460
ok 22 453a - Delete nonexistent mqprio Qdisc # skipped - skipped - previous teardown failed 14 3460
ok 23 5292 - Delete mqprio Qdisc twice # skipped - skipped - previous teardown failed 14 3460
ok 24 45a9 - Add mqprio Qdisc to single-queue device # skipped - skipped - previous teardown failed 14 3460
ok 25 2ba9 - Show mqprio class # skipped - skipped - previous teardown failed 14 3460
ok 26 4812 - Create HHF with default setting # skipped - skipped - previous teardown failed 14 3460
ok 27 8a92 - Create HHF with limit setting # skipped - skipped - previous teardown failed 14 3460
ok 28 3491 - Create HHF with quantum setting # skipped - skipped - previous teardown failed 14 3460
(...)