Message-ID: <87k0fn2pht.fsf@intel.com>
Date: Wed, 29 Dec 2021 16:13:50 -0300
From: Vinicius Costa Gomes <vinicius.gomes@...el.com>
To: xiangxia.m.yue@...il.com, netdev@...r.kernel.org
Cc: Tonghao Zhang <xiangxia.m.yue@...il.com>,
Jamal Hadi Salim <jhs@...atatu.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Jonathan Lemon <jonathan.lemon@...il.com>,
Eric Dumazet <edumazet@...gle.com>,
Alexander Lobakin <alobakin@...me>,
Paolo Abeni <pabeni@...hat.com>,
Talal Ahmad <talalahmad@...gle.com>,
Kevin Hao <haokexin@...il.com>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Kees Cook <keescook@...omium.org>,
Kumar Kartikeya Dwivedi <memxor@...il.com>,
Antoine Tenart <atenart@...nel.org>,
Wei Wang <weiwan@...gle.com>, Arnd Bergmann <arnd@...db.de>
Subject: Re: [net-next v5 1/2] net: sched: use queue_mapping to pick tx queue

xiangxia.m.yue@...il.com writes:

> From: Tonghao Zhang <xiangxia.m.yue@...il.com>
>
> This patch fixes the following issue:
> * If we install tc filters with act_skbedit at the clsact hook,
> it doesn't work, because netdev_core_pick_tx() overwrites
> queue_mapping.
>
> $ tc filter ... action skbedit queue_mapping 1
>
> This patch is also useful:
> * We can use FQ + EDT to implement efficient policies. Tx queues
> are picked by XPS, the ndo_select_queue of the netdev driver, or the
> skb hash in netdev_core_pick_tx(). In fact, the netdev driver and the
> skb hash are _not_ under our control. XPS uses the CPU map to select
> Tx queues, but in most cases we can't figure out which task_struct of
> a pod/container is running on a given CPU. With clsact filters we can
> classify one pod/container's traffic to one Tx queue. Why?
>
> In a container networking environment, there are two kinds of pods/
> containers/net-namespaces. For one kind (e.g. P1, P2), high throughput
> is key for the application. To avoid running out of network resources,
> the outbound traffic of these pods is limited: they use or share
> dedicated Tx queues with an HTB/TBF/FQ Qdisc assigned. For the other
> kind of pods (e.g. Pn), low latency of data access is key and the
> traffic is not limited; these pods use or share other dedicated Tx
> queues with a FIFO Qdisc assigned. This choice provides two benefits.
> First, contention on the HTB/FQ Qdisc lock is significantly reduced
> since fewer CPUs contend for the same queue. More importantly, Qdisc
> contention can be eliminated completely if each CPU has its own FIFO
> Qdisc for the second kind of pods.
>
> There must be a mechanism in place to classify traffic from different
> pods/containers to different Tx queues. Note that clsact runs outside
> of the Qdisc, while a Qdisc can only run a classifier to select a
> sub-queue under the Qdisc lock.
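
(For reference, the kind of setup described above, which this patch
enables, could look roughly like the sketch below; the interface name,
the pod address and the handle/queue numbers are only placeholders.)

$ # per-queue qdiscs via the mq root: TBF on tx-0, FIFO on tx-1
$ tc qdisc replace dev eth0 root handle 100: mq
$ tc qdisc replace dev eth0 parent 100:1 tbf rate 1gbit burst 256k latency 1ms
$ tc qdisc replace dev eth0 parent 100:2 pfifo_fast
$ # steer one pod's egress traffic to tx queue 1 (the FIFO queue)
$ tc qdisc add dev eth0 clsact
$ tc filter add dev eth0 egress protocol ip \
      flower src_ip 10.0.0.10 \
      action skbedit queue_mapping 1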

One alternative, I don't know if it would work for you, is to use the
net_prio cgroup + mqprio.

Something like this:
* create the cgroup
$ mkdir -p /sys/fs/cgroup/net_prio/<CGROUP_NAME>
* assign priorities to the cgroup (per interface)
$ echo "<IFACE> <PRIO>" >> /sys/fs/cgroup/net_prio/<CGROUP_NAME>/net_prio.ifpriomap
* use the cgroup in applications that do not set SO_PRIORITY
$ cgexec -g net_prio:<CGROUP_NAME> <application>
* configure mqprio
$ tc qdisc replace dev $IFACE parent root handle 100 mqprio \
      num_tc 3 \
      map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 \
      queues 1@0 1@1 2@2 \
      hw 0

This would map all traffic with SO_PRIORITY 3 to TX queue 0, for example.
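
(A rough way to sanity-check the result, assuming mqprio's per-queue
class counters are available and using iperf3/<SERVER> as stand-ins:)

$ cgexec -g net_prio:<CGROUP_NAME> iperf3 -c <SERVER>
$ tc -s class show dev $IFACE
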
But I agree that skbedit's queue_mapping not working is unexpected and
should be fixed.

Cheers,
--
Vinicius