Message-Id: <20211210193556.1349090-1-yannick.vignon@oss.nxp.com>
Date: Fri, 10 Dec 2021 20:35:52 +0100
From: Yannick Vignon <yannick.vignon@....nxp.com>
To: Giuseppe Cavallaro <peppe.cavallaro@...com>,
Alexandre Torgue <alexandre.torgue@...com>,
netdev@...r.kernel.org, Ong Boon Leong <boon.leong.ong@...el.com>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Jose Abreu <joabreu@...opsys.com>,
Eric Dumazet <edumazet@...gle.com>,
Wei Wang <weiwan@...gle.com>,
Alexander Lobakin <alexandr.lobakin@...el.com>,
Vladimir Oltean <olteanv@...il.com>,
Xiaoliang Yang <xiaoliang.yang_1@....com>, mingkai.hu@....com,
Joakim Zhang <qiangqing.zhang@....com>,
sebastien.laveze@....com
Subject: [RFC net-next 0/4] net: Improving network scheduling latencies
I am working on an application to showcase TSN use cases. That
application wakes up periodically, reads packet(s) from the network,
sends packet(s), then goes back to sleep. Endpoints are synchronized
through gPTP, and an 802.1Qbv schedule is in place to ensure packets are
sent at a fixed time. Right now, we achieve an overall period of 2ms,
which leaves 500µs between the time the application is supposed to
wake up and the time the last packet is sent. We use an NXP 5.10.x
kernel with PREEMPT_RT patches.
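
For reference, the application's cycle is essentially the classic
time-triggered loop sketched below. This is not the actual application
code: the rx/tx helpers are placeholders, and CLOCK_MONOTONIC is used
here for simplicity (the real setup would presumably key off the
PTP-synchronized clock). It is built around clock_nanosleep() with
TIMER_ABSTIME so the wakeups stay locked to the 2ms period instead of
drifting:

#include <stdint.h>
#include <time.h>

#define PERIOD_NS 2000000ULL	/* 2ms cycle, as in the current setup */

static void timespec_add_ns(struct timespec *ts, uint64_t ns)
{
	ts->tv_nsec += ns;
	while (ts->tv_nsec >= 1000000000L) {
		ts->tv_nsec -= 1000000000L;
		ts->tv_sec++;
	}
}

int main(void)
{
	struct timespec next;

	clock_gettime(CLOCK_MONOTONIC, &next);
	for (;;) {
		timespec_add_ns(&next, PERIOD_NS);
		/* Sleep until the next absolute deadline. */
		clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
		/* receive_frames();  placeholder: read packet(s) from the network */
		/* send_frames();     placeholder: send packet(s) before the Qbv gate opens */
	}
	return 0;
}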
I've been focusing lately on reducing the period, to see how close a
Linux-based system could get to a micro-controller with a "real-time"
OS. I've been able to achieve 500µs overall (125µs for the app itself)
by using AF_XDP sockets, but this also led to identifying several
sources of "scheduling" latencies, which I've tried to resolve with the
patches attached. The main culprit so far has been
local_bh_disable/local_bh_enable sections running in lower-priority
tasks, requiring costly context switches along with priority
inheritance. I've removed the offending sections without significant
problems so far, but I'm not entirely clear on the reason
local_bh_disable/enable were used in those places: is it a simple
oversight, an excess of caution, or am I missing something more
fundamental in the way those locks are used?
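
To make the problem more concrete, here is a simplified sketch (not
taken from the patches) of the kind of BH-protected section involved.
As I understand it, with PREEMPT_RT local_bh_disable() boils down to
taking a per-CPU sleeping lock, so a lower-priority task sitting in
such a section can force a high-priority networking thread through
priority inheritance and extra context switches before it can run:

#include <linux/netdevice.h>

/*
 * Illustration only: a TX path serialized with __netif_tx_lock_bh(),
 * i.e. spin_lock_bh() on the queue's xmit lock. On PREEMPT_RT the
 * BH-disabled section is itself backed by a per-CPU sleeping lock, so
 * a low-priority task holding it blocks any higher-priority thread
 * that also needs to disable BH on that CPU.
 */
static void example_xmit_locked(struct netdev_queue *txq)
{
	__netif_tx_lock_bh(txq);	/* disables BH, takes the queue lock */
	/* ... fill descriptors, ring the doorbell ... */
	__netif_tx_unlock_bh(txq);	/* drops the lock, re-enables BH */
}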
Thanks,
Yannick
Yannick Vignon (4):
  net: stmmac: remove unnecessary locking around PTP clock reads
  net: stmmac: do not use __netif_tx_lock_bh when in NAPI threaded mode
  net: stmmac: move to threaded IRQ
  net: napi threaded: remove unnecessary locking
 drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 44 +++++++++++++++++++++++++-------------------
 drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c  |  2 --
 net/core/dev.c                                    |  2 --
3 files changed, 25 insertions(+), 23 deletions(-)