Message-ID: <ace06e91-ddd5-1bf4-5845-6ab8ae4e8f55@oss.nxp.com>
Date: Fri, 12 Mar 2021 18:40:53 +0100
From: Yannick Vignon <yannick.vignon@....nxp.com>
To: netdev@...r.kernel.org
Subject: Offload taprio qdisc issue with concurrent best-effort traffic
Hi,
For multiqueue devices, the default mq configuration sets up a separate
qdisc for each device queue. If I'm not mistaken, this is done in the
mq_attach callback, by calling the dev_graft_qdisc function, which
initializes dev_queue->qdisc.
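
For reference, this is roughly what mq does on attach (paraphrased
from net/sched/sch_mq.c and abridged, so details may be off):

static void mq_attach(struct Qdisc *sch)
{
	struct net_device *dev = qdisc_dev(sch);
	struct mq_sched *priv = qdisc_priv(sch);
	struct Qdisc *qdisc, *old;
	unsigned int ntx;

	for (ntx = 0; ntx < dev->num_tx_queues; ntx++) {
		qdisc = priv->qdiscs[ntx];
		/* graft each per-queue qdisc onto its own netdev_queue,
		 * making it the qdisc seen by the transmit path */
		old = dev_graft_qdisc(qdisc->dev_queue, qdisc);
		if (old)
			qdisc_put(old);
	}
	kfree(priv->qdiscs);
	priv->qdiscs = NULL;
}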
However, the taprio qdisc does not do this: all the device queues point
to the same taprio qdisc, even though each taprio "class" can still
hold a child qdisc of its own. This works, and might even be required
for non-offload taprio, but at least with offload taprio it has an
undesirable side effect: because the whole qdisc is run whenever a
packet has to be sent, packets sitting in a best-effort class can end
up being processed in the context, and on the core, of a high-priority
task. At that point, even though locking is per netdev_queue, the
high-priority task can end up blocked because the qdisc is waiting for
the best-effort lock to be released by another core.
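
To illustrate, the transmit path serializes on the lock of whatever
qdisc is grafted on the netdev_queue (simplified from __dev_xmit_skb()
in net/core/dev.c, from memory), so when every queue points to the
same taprio instance, every core contends on that single qdisc lock:

	/* q = rcu_dereference_bh(txq->qdisc); */
	spinlock_t *root_lock = qdisc_lock(q);	/* per-qdisc lock, here
						 * shared by all queues */
	...
	spin_lock(root_lock);
	rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
	qdisc_run(q);	/* may dequeue/transmit from any class,
			 * including best-effort ones */
	spin_unlock(root_lock);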
I have tried adding an attach callback to the taprio qdisc to make it
behave the same way as the other multiqueue qdiscs, and that seems to
solve my problem: the latencies observed by my high-priority task are
back to normal, even with a lot of best-effort traffic being sent from
the same system at the same time.
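
For reference, the attach callback I mean looks roughly like this
(just a sketch, modeled on mq_attach(); the offload check is a guess
at how to keep the non-offload case working, and the field/macro names
follow sch_taprio.c):

static void taprio_attach(struct Qdisc *sch)
{
	struct taprio_sched *q = qdisc_priv(sch);
	struct net_device *dev = qdisc_dev(sch);
	unsigned int ntx;

	for (ntx = 0; ntx < dev->num_tx_queues; ntx++) {
		struct Qdisc *qdisc = q->qdiscs[ntx];
		struct Qdisc *old;

		if (FULL_OFFLOAD_IS_ENABLED(q->flags)) {
			/* graft each child qdisc onto its netdev_queue,
			 * like mq does, so each queue runs (and locks)
			 * independently */
			qdisc->flags |= TCQ_F_ONETXQUEUE | TCQ_F_NOPARENT;
			old = dev_graft_qdisc(qdisc->dev_queue, qdisc);
		} else {
			/* non-offload taprio has to see every packet,
			 * so keep grafting the taprio qdisc itself */
			old = dev_graft_qdisc(qdisc->dev_queue, sch);
			qdisc_refcount_inc(sch);
		}
		if (old)
			qdisc_put(old);
	}
}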
Is my understanding of the problem correct, and is this the right way
to fix it? If so, I can prepare a patch, but I wanted some initial
feedback first, as I don't think I've grasped all the details of the
net scheduling code (if anything, I suspect my changes as they stand
would break a non-offload taprio qdisc).
Thanks,
Yannick