Message-ID: <CANn89iJsOHK1qgudpfFW9poC4NRBZiob-ynTOuRBkuJTw6FaJw@mail.gmail.com>
Date: Mon, 22 Aug 2022 09:22:39 -0700
From: Eric Dumazet <edumazet@...gle.com>
To: Peilin Ye <yepeilin.cs@...il.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Jonathan Corbet <corbet@....net>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
David Ahern <dsahern@...nel.org>,
Jamal Hadi Salim <jhs@...atatu.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>,
Peilin Ye <peilin.ye@...edance.com>,
netdev <netdev@...r.kernel.org>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Cong Wang <cong.wang@...edance.com>,
Stephen Hemminger <stephen@...workplumber.org>,
Dave Taht <dave.taht@...il.com>
Subject: Re: [PATCH RFC v2 net-next 0/5] net: Qdisc backpressure infrastructure
On Mon, Aug 22, 2022 at 2:10 AM Peilin Ye <yepeilin.cs@...il.com> wrote:
>
> From: Peilin Ye <peilin.ye@...edance.com>
>
> Hi all,
>
> Currently, sockets (especially UDP ones) can drop a lot of packets at TC
> egress when rate limited by shaper Qdiscs like HTB. This patch series
> tries to solve this by introducing a Qdisc backpressure mechanism.
>
> RFC v1 [1] used a throttle & unthrottle approach, which introduced several
> issues, including a thundering herd problem and a socket reference count
> issue [2]. This RFC v2 uses a different approach to avoid those issues:
>
> 1. When a shaper Qdisc drops a packet that belongs to a local socket due
> to TC egress congestion, we make part of the socket's sndbuf
> temporarily unavailable, so it sends slower.
>
> 2. Later, when TC egress becomes idle again, we gradually recover the
> socket's sndbuf back to normal. Patch 2 implements this step using a
> timer for UDP sockets; a rough sketch of both steps follows below.
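>
> Very roughly, the idea could be modeled as in the user-space sketch
> below. This is purely illustrative: none of the names, fields or
> constants are taken from the actual patches.
>
> /*
>  * Toy model of steps 1 and 2: shrink the usable sndbuf on every
>  * shaper drop, and grow it back while egress stays idle.
>  */
> #include <stdio.h>
>
> #define SNDBUF     212992          /* a typical default SO_SNDBUF */
> #define BP_CHARGE  (SNDBUF / 8)    /* taken away per qdisc drop */
> #define BP_RECOVER (SNDBUF / 16)   /* given back per idle timer tick */
>
> struct toy_sock {
>     int sndbuf;     /* configured send buffer size */
>     int charged;    /* part temporarily made unavailable */
> };
>
> /* Send buffer the socket may actually use right now. */
> static int effective_sndbuf(const struct toy_sock *sk)
> {
>     return sk->sndbuf - sk->charged;
> }
>
> /* Step 1: a shaper Qdisc dropped one of this socket's packets. */
> static void backpressure_charge(struct toy_sock *sk)
> {
>     if (sk->charged + BP_CHARGE < sk->sndbuf)
>         sk->charged += BP_CHARGE;
> }
>
> /* Step 2: TC egress has gone idle; a timer gradually undoes the charge. */
> static void backpressure_recover(struct toy_sock *sk)
> {
>     sk->charged = sk->charged > BP_RECOVER ? sk->charged - BP_RECOVER : 0;
> }
>
> int main(void)
> {
>     struct toy_sock sk = { .sndbuf = SNDBUF };
>     int i;
>
>     for (i = 0; i < 3; i++)
>         backpressure_charge(&sk);    /* three simulated drops */
>     printf("after drops:    %d\n", effective_sndbuf(&sk));
>
>     for (i = 0; i < 6; i++)
>         backpressure_recover(&sk);   /* six idle timer ticks */
>     printf("after recovery: %d\n", effective_sndbuf(&sk));
>     return 0;
> }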
>
> The thundering herd problem is avoided, since we no longer wake up all
> throttled sockets at the same time in qdisc_watchdog(). The socket
> reference count issue is also avoided, since we no longer maintain a
> socket list on the Qdisc.
>
> Performance is better than in RFC v1. There is one concern about
> fairness between flows for the TBF Qdisc, which could be solved by
> using an SFQ inner Qdisc.
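>
> As an illustration only (the device name and rates are placeholders,
> not something from this series), attaching SFQ below TBF could look
> like:
>
>   tc qdisc add dev eth0 root handle 1: tbf rate 10mbit burst 32kb latency 50ms
>   tc qdisc add dev eth0 parent 1:1 handle 10: sfq perturb 10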
>
> Please see the individual patches for details and numbers. Any comments
> or suggestions would be much appreciated. Thanks!
>
> [1] https://lore.kernel.org/netdev/cover.1651800598.git.peilin.ye@bytedance.com/
> [2] https://lore.kernel.org/netdev/20220506133111.1d4bebf3@hermes.local/
>
> Peilin Ye (5):
> net: Introduce Qdisc backpressure infrastructure
> net/udp: Implement Qdisc backpressure algorithm
> net/sched: sch_tbf: Use Qdisc backpressure infrastructure
> net/sched: sch_htb: Use Qdisc backpressure infrastructure
> net/sched: sch_cbq: Use Qdisc backpressure infrastructure
>
I think the whole idea is wrong.

Packet schedulers can be remote (offloaded, or on another box). The
idea of going back to the socket level from a packet scheduler should
really be a last resort.

The issue of UDP sockets being able to flood a network is a tough one;
I am not sure the core networking stack should pretend it can solve it.

Note that FQ-based packet schedulers can also help already.