Message-ID: <YyihIGsWtyQ9fS9q@pop-os.localdomain>
Date:   Mon, 19 Sep 2022 10:04:32 -0700
From:   Cong Wang <xiyou.wangcong@...il.com>
To:     Yafang Shao <laoar.shao@...il.com>
Cc:     Eric Dumazet <edumazet@...gle.com>,
        Peilin Ye <yepeilin.cs@...il.com>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Jonathan Corbet <corbet@....net>,
        Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
        David Ahern <dsahern@...nel.org>,
        Jamal Hadi Salim <jhs@...atatu.com>,
        Jiri Pirko <jiri@...nulli.us>,
        Peilin Ye <peilin.ye@...edance.com>,
        netdev <netdev@...r.kernel.org>,
        "open list:DOCUMENTATION" <linux-doc@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Cong Wang <cong.wang@...edance.com>,
        Stephen Hemminger <stephen@...workplumber.org>,
        Dave Taht <dave.taht@...il.com>
Subject: Re: [PATCH RFC v2 net-next 0/5] net: Qdisc backpressure
 infrastructure

On Tue, Aug 30, 2022 at 10:28:01AM +0800, Yafang Shao wrote:
> On Tue, Aug 23, 2022 at 1:02 AM Eric Dumazet <edumazet@...gle.com> wrote:
> >
> > On Mon, Aug 22, 2022 at 2:10 AM Peilin Ye <yepeilin.cs@...il.com> wrote:
> > >
> > > From: Peilin Ye <peilin.ye@...edance.com>
> > >
> > > Hi all,
> > >
> > > Currently sockets (especially UDP ones) can drop a lot of packets at TC
> > > egress when rate limited by shaper Qdiscs like HTB.  This patch series
> > > tries to solve this by introducing a Qdisc backpressure mechanism.
> > >
> > > RFC v1 [1] used a throttle & unthrottle approach, which introduced several
> > > issues, including a thundering herd problem and a socket reference count
> > > issue [2].  This RFC v2 uses a different approach to avoid those issues:
> > >
> > >   1. When a shaper Qdisc drops a packet that belongs to a local socket due
> > >      to TC egress congestion, we make part of the socket's sndbuf
> > >      temporarily unavailable, so it sends slower.
> > >
> > >   2. Later, when TC egress becomes idle again, we gradually recover the
> > >      socket's sndbuf back to normal.  Patch 2 implements this step using a
> > >      timer for UDP sockets.
> > >
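In other words, the feedback loop looks roughly like this -- a toy userspace
sketch, not the patch code; the struct, helper names and constants below are
made up purely for illustration:

/*
 * Toy userspace model of the two steps above -- not kernel code.
 * All names and constants are illustrative assumptions.
 */
#include <stdio.h>

#define SNDBUF        212992   /* assumed socket send buffer, bytes */
#define PKT_SIZE        1500   /* assumed datagram size, bytes */
#define RECOVER_STEP    4096   /* bytes given back per timer tick, assumed */

struct sock_model {
    int sndbuf;        /* configured send buffer */
    int backpressure;  /* bytes temporarily made unavailable */
};

/* Step 1: a shaper drop charges part of the sender's sndbuf. */
static void on_qdisc_drop(struct sock_model *sk)
{
    sk->backpressure += PKT_SIZE;
    if (sk->backpressure > sk->sndbuf / 2)   /* never take the whole buffer */
        sk->backpressure = sk->sndbuf / 2;
}

/* Step 2: once egress is idle, a timer gradually hands the buffer back. */
static void on_recovery_timer(struct sock_model *sk)
{
    sk->backpressure -= RECOVER_STEP;
    if (sk->backpressure < 0)
        sk->backpressure = 0;
}

static int sndbuf_available(const struct sock_model *sk)
{
    return sk->sndbuf - sk->backpressure;
}

int main(void)
{
    struct sock_model sk = { .sndbuf = SNDBUF };
    int i;

    for (i = 0; i < 50; i++)    /* burst of drops at TC egress */
        on_qdisc_drop(&sk);
    printf("after drops:    %d bytes usable\n", sndbuf_available(&sk));

    for (i = 0; i < 20; i++)    /* egress idle, timer ticks */
        on_recovery_timer(&sk);
    printf("after recovery: %d bytes usable\n", sndbuf_available(&sk));
    return 0;
}

The real series presumably hooks the shaper's drop path and the socket's
send-buffer accounting instead, but the shape of the feedback loop is the
same.
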
> > > The thundering herd problem is avoided, since we no longer wake up all
> > > throttled sockets at the same time in qdisc_watchdog().  The socket
> > > reference count issue is also avoided, since we no longer maintain a
> > > socket list on the Qdisc.
> > >
> > > Performance is better than RFC v1.  There is one concern about fairness
> > > between flows for the TBF Qdisc, which could be solved by using an SFQ
> > > inner Qdisc.
> > >
> > > Please see the individual patches for details and numbers.  Any comments,
> > > suggestions would be much appreciated.  Thanks!
> > >
> > > [1] https://lore.kernel.org/netdev/cover.1651800598.git.peilin.ye@bytedance.com/
> > > [2] https://lore.kernel.org/netdev/20220506133111.1d4bebf3@hermes.local/
> > >
> > > Peilin Ye (5):
> > >   net: Introduce Qdisc backpressure infrastructure
> > >   net/udp: Implement Qdisc backpressure algorithm
> > >   net/sched: sch_tbf: Use Qdisc backpressure infrastructure
> > >   net/sched: sch_htb: Use Qdisc backpressure infrastructure
> > >   net/sched: sch_cbq: Use Qdisc backpressure infrastructure
> > >
> >
> > I think the whole idea is wrong.
> >
> > Packet schedulers can be remote (offloaded, or on another box).
> >
> > The idea of going back to socket level from a packet scheduler should
> > really be a last resort.
> >
> > The issue of UDP sockets being able to flood a network is tough; I am
> > not sure the core networking stack should pretend it can solve the issue.
> >
> > Note that FQ based packet schedulers can also help already.
> 
> We encounter a similar issue when using (fq + edt-bpf) to limit UDP
> packets, because of the qdisc buffer limit.
> If the qdisc buffer limit is too small, UDP packets will be dropped
> in the qdisc layer. But the sender doesn't know that the packets have
> been dropped, so it will continue to send packets, and thus more and
> more packets will be dropped there.  IOW, the qdisc will be a
> bottleneck before the bandwidth limit is reached.
> We work around this issue by enlarging the buffer limit and flow_limit
> (the proper values can be calculated from net.ipv4.udp_mem and
> net.core.wmem_default).
> But obviously this is not a perfect solution, because
> net.ipv4.udp_mem or net.core.wmem_default may be changed dynamically.
> We also thought about a solution that builds a connection between udp
> memory and the qdisc limit, but we are not sure whether that is a good idea
> either.
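
To put rough numbers on that mismatch (defaults vary by kernel and
configuration, so treat these as assumptions): with net.core.wmem_default at
a typical 212992 bytes, a single UDP socket can have roughly 212992 / 1500,
i.e. about 140 full-size datagrams, sitting in the qdisc, while fq defaults
to flow_limit 100 and limit 10000 packets. A handful of busy senders can
therefore hit the qdisc limit long before the configured rate is reached,
which is why the limits have to be derived from udp_mem / wmem_default.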

Building such a connection is literally what this patchset does. Although
this patchset does not touch TCP (since TCP already has TSQ), I think this is
a better approach than TSQ, because TSQ has no idea about the Qdisc limit.
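(For context: the TSQ budget for each TCP socket is derived from the socket's
pacing rate and clamped by tcp_limit_output_bytes; it is purely per-socket,
so with enough sockets a small qdisc limit can still be overrun.)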

Thanks.
