Message-Id: <cover.1661158173.git.peilin.ye@bytedance.com>
Date: Mon, 22 Aug 2022 02:10:17 -0700
From: Peilin Ye <yepeilin.cs@...il.com>
To: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Jonathan Corbet <corbet@....net>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
David Ahern <dsahern@...nel.org>,
Jamal Hadi Salim <jhs@...atatu.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>
Cc: Peilin Ye <peilin.ye@...edance.com>, netdev@...r.kernel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
Cong Wang <cong.wang@...edance.com>,
Stephen Hemminger <stephen@...workplumber.org>,
Dave Taht <dave.taht@...il.com>,
Peilin Ye <yepeilin.cs@...il.com>
Subject: [PATCH RFC v2 net-next 0/5] net: Qdisc backpressure infrastructure
From: Peilin Ye <peilin.ye@...edance.com>
Hi all,
Currently, sockets (especially UDP ones) can drop a lot of packets at TC
egress when rate limited by shaper Qdiscs like HTB. This patchset
tries to solve this by introducing a Qdisc backpressure mechanism.
RFC v1 [1] used a throttle & unthrottle approach, which introduced several
issues, including a thundering herd problem and a socket reference count
issue [2]. This RFC v2 uses a different approach to avoid those issues:
1. When a shaper Qdisc drops a packet that belongs to a local socket due
to TC egress congestion, we make part of the socket's sndbuf
temporarily unavailable, so it sends slower.
2. Later, when TC egress becomes idle again, we gradually recover the
socket's sndbuf back to normal. Patch 2 implements this step using a
timer for UDP sockets.
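As a rough illustration of the two steps above, here is a minimal userland
sketch (not kernel code) of the idea: on a shaper drop, part of the socket's
send buffer is withheld; a hypothetical recovery timer then returns the
withheld budget gradually. All names, constants, and the exact halving /
decay factors here are assumptions for illustration only, not the actual
patchset implementation.

```c
#include <assert.h>

#define SNDBUF_DEFAULT 212992	/* a common default sk_sndbuf value; assumption */

struct sk_model {
	int sndbuf;	/* configured send buffer size */
	int withheld;	/* backpressure state: portion of sndbuf made unavailable */
};

/* Step 1: on a shaper Qdisc drop, withhold half of the currently
 * available sndbuf, so the socket sends slower. */
static void sk_backpressure(struct sk_model *sk)
{
	int avail = sk->sndbuf - sk->withheld;

	sk->withheld += avail / 2;
}

/* Step 2: on each recovery timer tick (while TC egress is idle),
 * return a quarter of the withheld budget; once the remainder is
 * small, snap back to normal. */
static void sk_recover_tick(struct sk_model *sk)
{
	sk->withheld -= sk->withheld / 4;
	if (sk->withheld < 4096)
		sk->withheld = 0;
}

/* How much sndbuf the socket can currently use. */
static int sk_sndbuf_avail(const struct sk_model *sk)
{
	return sk->sndbuf - sk->withheld;
}
```

Because each drop halves only the *remaining* budget, repeated congestion
backs the socket off geometrically, while the timer-driven recovery restores
the full sndbuf without waking every throttled socket at once.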
The thundering herd problem is avoided, since we no longer wake up all
throttled sockets at the same time in qdisc_watchdog(). The socket
reference count issue is also avoided, since we no longer maintain a
socket list on the Qdisc.
Performance is better than RFC v1. There is one concern about fairness
between flows for the TBF Qdisc, which could be solved by using an SFQ
inner Qdisc.
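For reference, attaching SFQ as TBF's inner Qdisc might look roughly like
the following (interface name and all parameters are made-up examples, not
taken from the patchset):

```shell
# Shape eth0 to 100 Mbit/s with TBF, then attach SFQ under TBF's
# single class (1:1) so flows share the shaped bandwidth fairly.
tc qdisc add dev eth0 root handle 1: tbf rate 100mbit burst 16kb latency 50ms
tc qdisc add dev eth0 parent 1:1 handle 10: sfq perturb 10
```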
Please see the individual patches for details and numbers. Any comments or
suggestions would be much appreciated. Thanks!
[1] https://lore.kernel.org/netdev/cover.1651800598.git.peilin.ye@bytedance.com/
[2] https://lore.kernel.org/netdev/20220506133111.1d4bebf3@hermes.local/
Peilin Ye (5):
net: Introduce Qdisc backpressure infrastructure
net/udp: Implement Qdisc backpressure algorithm
net/sched: sch_tbf: Use Qdisc backpressure infrastructure
net/sched: sch_htb: Use Qdisc backpressure infrastructure
net/sched: sch_cbq: Use Qdisc backpressure infrastructure
Documentation/networking/ip-sysctl.rst | 11 ++++
include/linux/udp.h | 3 ++
include/net/netns/ipv4.h | 1 +
include/net/sch_generic.h | 11 ++++
include/net/sock.h | 21 ++++++++
include/net/udp.h | 1 +
net/core/sock.c | 5 +-
net/ipv4/sysctl_net_ipv4.c | 7 +++
net/ipv4/udp.c | 69 +++++++++++++++++++++++++-
net/ipv6/udp.c | 2 +-
net/sched/sch_cbq.c | 1 +
net/sched/sch_htb.c | 2 +
net/sched/sch_tbf.c | 2 +
13 files changed, 132 insertions(+), 4 deletions(-)
--
2.20.1