Message-ID: <CAM_iQpWxHb8VrHUQAEgOQ0YsSVt5MMZvGvAQVuA-JGcfjc=ubg@mail.gmail.com>
Date: Tue, 26 Jun 2018 18:28:27 -0700
From: Cong Wang <xiyou.wangcong@...il.com>
To: Flavio Leitner <fbl@...hat.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
    Linux Kernel Network Developers <netdev@...r.kernel.org>,
    Paolo Abeni <pabeni@...hat.com>,
    David Miller <davem@...emloft.net>,
    Florian Westphal <fw@...len.de>,
    NetFilter <netfilter-devel@...r.kernel.org>
Subject: Re: [PATCH net-next] net: preserve sock reference when scrubbing the skb.
On Tue, Jun 26, 2018 at 5:39 PM Flavio Leitner <fbl@...hat.com> wrote:
>
> On Tue, Jun 26, 2018 at 05:29:51PM -0700, Cong Wang wrote:
> > On Tue, Jun 26, 2018 at 4:33 PM Flavio Leitner <fbl@...hat.com> wrote:
> > >
> > > It is still isolated, the sk carries the netns info and it is
> > > orphaned when it re-enters the stack.
> >
> > Then what difference does your patch make?
>
> Don't forget it is fixing two issues.
Sure. I have only been talking about TSQ from the very beginning.
Let me rephrase my question above:
What difference does your patch make to TSQ?
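(For readers following along: the TSQ angle hinges on the skb destructor.
Below is a minimal user-space sketch of that mechanism; the structs and
names are simplified stand-ins inspired by skb_orphan()/tcp_wfree(), not the
actual kernel code. Orphaning the skb fires the destructor, which is what
returns the bytes to the socket's TSQ budget, so *where* the orphan happens
decides how long TSQ keeps throttling the sender.)

    /* Toy model of skb orphaning vs. TSQ byte accounting.
     * Simplified illustration only, not kernel code. */
    #include <stdio.h>

    struct sock { unsigned int wmem_alloc; };   /* bytes charged to the socket */

    struct sk_buff {
        struct sock *sk;
        unsigned int truesize;
        void (*destructor)(struct sk_buff *skb);
    };

    /* Simplified tcp_wfree(): uncharge the bytes when the skb is orphaned/freed. */
    static void toy_tcp_wfree(struct sk_buff *skb)
    {
        skb->sk->wmem_alloc -= skb->truesize;
    }

    /* Simplified skb_orphan(): run the destructor and drop the socket reference. */
    static void toy_skb_orphan(struct sk_buff *skb)
    {
        if (skb->destructor)
            skb->destructor(skb);
        skb->destructor = NULL;
        skb->sk = NULL;
    }

    int main(void)
    {
        struct sock sk = { .wmem_alloc = 4096 };
        struct sk_buff skb = { .sk = &sk, .truesize = 4096,
                               .destructor = toy_tcp_wfree };

        /* Pre-patch behaviour: the scrub on veth xmit orphans the skb here,
         * releasing the TSQ budget while the packet is still in flight. */
        toy_skb_orphan(&skb);
        printf("wmem_alloc after orphan: %u\n", sk.wmem_alloc);
        return 0;
    }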
>
> > Before your patch:
> > veth orphans skb in its xmit
> >
> > After your patch:
> > RX orphans it when re-entering stack (as you claimed, I don't know)
>
> ip_rcv, and equivalents.
ip_rcv() is L3; we enter the stack at L1. So your claim above is incorrect. :)
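Roughly, for a veth pair the path looks like this (from my reading of the
current tree; exact function names may have drifted between versions):

    veth_xmit()
      -> dev_forward_skb()
        -> netif_rx()                 /* L1/L2 entry, skb parked on the CPU backlog */
          -> process_backlog()
            -> __netif_receive_skb()
              -> ip_rcv()             /* only here, at L3, would the orphan happen */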
>
> > And for veth pair:
> > xmit from one side is RX for the other side
> > So, where is the queueing? Where is the buffer bloat? GRO list??
>
> CPU backlog.
Yeah, but the CPU backlog is never targeted by TSQ:

    tcp_limit_output_bytes limits the number of bytes on qdisc
    or device to reduce artificial RTT/cwnd and reduce bufferbloat.

which means you have to update Documentation/networking/ip-sysctl.txt
too.
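To make the doc-update point concrete, here is a toy version of the limit
check (a hand-written illustration with made-up names, not the kernel's
actual check in tcp_write_xmit()): whatever is still charged to the socket
when the check runs counts against tcp_limit_output_bytes, so once the
orphan is deferred to ip_rcv(), bytes parked on the veth peer's CPU backlog
count too.

    /* Toy illustration of the TSQ limit check; simplified stand-ins only. */
    #include <stdbool.h>
    #include <stdio.h>

    /* corresponds loosely to net.ipv4.tcp_limit_output_bytes */
    static unsigned int limit_output_bytes = 262144;

    /* true = stop queueing more skbs for this socket */
    static bool toy_tsq_throttled(unsigned int wmem_alloc)
    {
        /* With the deferred orphan, wmem_alloc still includes skbs sitting
         * on the peer's CPU backlog, not only qdisc/device queues. */
        return wmem_alloc > limit_output_bytes;
    }

    int main(void)
    {
        printf("%d\n", toy_tsq_throttled(300000));  /* 1: sender is throttled */
        return 0;
    }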