Date:   Mon, 22 Aug 2022 02:12:34 -0700
From:   Peilin Ye <yepeilin.cs@...il.com>
To:     "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
        David Ahern <dsahern@...nel.org>,
        Jamal Hadi Salim <jhs@...atatu.com>,
        Cong Wang <xiyou.wangcong@...il.com>,
        Jiri Pirko <jiri@...nulli.us>
Cc:     Peilin Ye <peilin.ye@...edance.com>, netdev@...r.kernel.org,
        linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
        Cong Wang <cong.wang@...edance.com>,
        Stephen Hemminger <stephen@...workplumber.org>,
        Dave Taht <dave.taht@...il.com>,
        Peilin Ye <yepeilin.cs@...il.com>
Subject: [PATCH RFC v2 net-next 3/5] net/sched: sch_tbf: Use Qdisc backpressure infrastructure

From: Peilin Ye <peilin.ye@...edance.com>

Recently we introduced a Qdisc backpressure infrastructure (which
currently supports UDP sockets).  Use it in the TBF Qdisc.

Tested with a 500 Mbits/sec rate limit and an SFQ inner Qdisc, using 16
iperf UDP 1 Gbit/sec clients.  Before:

[  3]  0.0-15.0 sec  53.6 MBytes  30.0 Mbits/sec   0.208 ms 1190234/1228450 (97%)
[  3]  0.0-15.0 sec  54.7 MBytes  30.6 Mbits/sec   0.085 ms   955591/994593 (96%)
[  3]  0.0-15.0 sec  55.4 MBytes  31.0 Mbits/sec   0.170 ms  966364/1005868 (96%)
[  3]  0.0-15.0 sec  55.0 MBytes  30.8 Mbits/sec   0.167 ms   925083/964333 (96%)
<...>                                                         ^^^^^^^^^^^^^^^^^^^

Total throughput is 480.2 Mbits/sec and average drop rate is 96.5%.
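For reference, the setup above can be reproduced with something like the
following.  The interface name, burst/latency values, and receiver address
are illustrative, not taken from this patch:

```shell
# Sender: 500 Mbit/s TBF with an SFQ inner Qdisc
# (eth0, burst and latency values are illustrative)
tc qdisc add dev eth0 root handle 1: tbf rate 500mbit burst 128kb latency 50ms
tc qdisc add dev eth0 parent 1:1 handle 10: sfq

# 16 UDP clients, each targeting 1 Gbit/s for 15 seconds
# (the receiver runs "iperf -s -u")
iperf -c "$RECEIVER" -u -b 1000m -t 15 -P 16
```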

Now enable Qdisc backpressure for UDP sockets, with
udp_backpressure_interval defaulting to 100 milliseconds:

[  3]  0.0-15.0 sec  54.4 MBytes  30.4 Mbits/sec   0.097 ms 450/39246 (1.1%)
[  3]  0.0-15.0 sec  54.4 MBytes  30.4 Mbits/sec   0.331 ms 435/39232 (1.1%)
[  3]  0.0-15.0 sec  54.4 MBytes  30.4 Mbits/sec   0.040 ms 435/39212 (1.1%)
[  3]  0.0-15.0 sec  54.4 MBytes  30.4 Mbits/sec   0.031 ms 426/39208 (1.1%)
<...>                                                       ^^^^^^^^^^^^^^^^

Total throughput is 486.4 Mbits/sec (1.29% higher) and average drop rate
is 1.1% (98.86% lower).

However, enabling Qdisc backpressure affects fairness between flows if we
use the TBF Qdisc with the default bfifo inner Qdisc:

[  3]  0.0-15.0 sec  46.1 MBytes  25.8 Mbits/sec   1.102 ms 142/33048 (0.43%)
[  3]  0.0-15.0 sec  72.8 MBytes  40.7 Mbits/sec   0.476 ms 145/52081 (0.28%)
[  3]  0.0-15.0 sec  53.2 MBytes  29.7 Mbits/sec   1.047 ms 141/38086 (0.37%)
[  3]  0.0-15.0 sec  45.5 MBytes  25.4 Mbits/sec   1.600 ms 141/32573 (0.43%)
<...>                                                       ^^^^^^^^^^^^^^^^^

In the test, per-flow throughput ranged from 16.4 to 68.7 Mbits/sec.
However, total throughput was still 486.4 Mbits/sec (0.87% higher than
before), and average drop rate was 0.41% (99.58% lower than before).
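If per-flow fairness matters, one workaround (not part of this patch) is to
use a fair inner Qdisc such as SFQ rather than the default bfifo.  Interface
name and handles below are illustrative:

```shell
# Replace TBF's default bfifo inner Qdisc with SFQ to restore
# per-flow fairness (interface name is illustrative)
tc qdisc replace dev eth0 parent 1:1 handle 10: sfq
```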

Signed-off-by: Peilin Ye <peilin.ye@...edance.com>
---
 net/sched/sch_tbf.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
index 72102277449e..cf9cc7dbf078 100644
--- a/net/sched/sch_tbf.c
+++ b/net/sched/sch_tbf.c
@@ -222,6 +222,7 @@ static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch,
 		len += segs->len;
 		ret = qdisc_enqueue(segs, q->qdisc, to_free);
 		if (ret != NET_XMIT_SUCCESS) {
+			qdisc_backpressure(skb);
 			if (net_xmit_drop_count(ret))
 				qdisc_qstats_drop(sch);
 		} else {
@@ -250,6 +251,7 @@ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 	}
 	ret = qdisc_enqueue(skb, q->qdisc, to_free);
 	if (ret != NET_XMIT_SUCCESS) {
+		qdisc_backpressure(skb);
 		if (net_xmit_drop_count(ret))
 			qdisc_qstats_drop(sch);
 		return ret;
-- 
2.20.1
