Message-Id: <20211103143208.41282-1-xiangxia.m.yue@gmail.com>
Date: Wed, 3 Nov 2021 22:32:08 +0800
From: xiangxia.m.yue@...il.com
To: netdev@...r.kernel.org
Cc: Tonghao Zhang <xiangxia.m.yue@...il.com>,
Willem de Bruijn <willemb@...gle.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jakub Kicinski <kuba@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>
Subject: [PATCH v2] net: sched: check tc_skip_classify as far as possible
From: Tonghao Zhang <xiangxia.m.yue@...il.com>
We look up filters and only then check the tc_skip_classify
flag in the net sched layer, even when the skb does not want
to be classified at all. That case may consume a lot of CPU
cycles.
Install the rules as below:
$ for id in $(seq 1 100); do
$ tc filter add ... egress prio $id ... action mirred egress redirect dev ifb0
$ done
netperf:
$ taskset -c 1 netperf -t TCP_RR -H ip -- -r 32,32
$ taskset -c 1 netperf -t TCP_STREAM -H ip -- -m 32
Before: 10662.33 tps, 108.95 Mbit/s
After: 12434.48 tps, 145.89 Mbit/s
For TCP_RR, this is a 16.6% improvement; for TCP_STREAM, 33.9%.
Cc: Willem de Bruijn <willemb@...gle.com>
Cc: Cong Wang <xiyou.wangcong@...il.com>
Cc: Jakub Kicinski <kuba@...nel.org>
Cc: Daniel Borkmann <daniel@...earbox.net>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@...il.com>
---
v2: don't delete skb_skip_tc_classify in act_api
---
net/core/dev.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/net/core/dev.c b/net/core/dev.c
index edeb811c454e..fc29a429e9ad 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3940,6 +3940,9 @@ sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
if (!miniq)
return skb;
+ if (skb_skip_tc_classify(skb))
+ return skb;
+
/* qdisc_skb_cb(skb)->pkt_len was already set by the caller. */
qdisc_skb_cb(skb)->mru = 0;
qdisc_skb_cb(skb)->post_ct = false;
--
2.27.0