Message-ID: <20211130072308.76cc711c@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date: Tue, 30 Nov 2021 07:23:08 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Menglong Dong <menglong8.dong@...il.com>
Cc: David Miller <davem@...emloft.net>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
dsahern@...nel.org, Eric Dumazet <edumazet@...gle.com>,
Menglong Dong <imagedong@...cent.com>,
Yuchung Cheng <ycheng@...gle.com>, kuniyu@...zon.co.jp,
LKML <linux-kernel@...r.kernel.org>,
netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH v2 net-next] net: snmp: add statistics for tcp small
queue check
On Tue, 30 Nov 2021 22:36:59 +0800 Menglong Dong wrote:
> On Mon, Nov 29, 2021 at 11:57 PM Jakub Kicinski <kuba@...nel.org> wrote:
> >
> > On Sun, 28 Nov 2021 14:01:02 +0800 menglong8.dong@...il.com wrote:
> > > Once the tcp small queue check in tcp_small_queue_check() fails,
> > > the throughput of tcp is limited, and it's hard to distinguish
> > > whether the limit comes from tcp congestion control or not.
> > >
> > > Add a LINUX_MIB_TCPSMALLQUEUEFAILURE statistic for this case.
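The diff itself isn't quoted in this thread; going by the description,
the change amounts to bumping a new SNMP counter at the point where the
TSQ check stops transmission. A rough sketch, not the exact patch:

    /* In tcp_small_queue_check(), net/ipv4/tcp_output.c: count each
     * time the check refuses to queue more data to the device.
     */
    if (refcount_read(&sk->sk_wmem_alloc) > limit) {
            /* new counter proposed by the patch */
            NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSMALLQUEUEFAILURE);
            ...
            return true;
    }

Like other LINUX_MIB_* counters, it would then be readable from
/proc/net/netstat, e.g. via nstat.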
> >
> > Isn't this going to trigger all the time and alarm users because of
> > the "Failure" in the TCPSmallQueueFailure name? Isn't it perfectly
> > fine for TCP to queue up a full TSQ amount of data and have it paced
> > out onto the wire? What's your link speed?
>
> Well, it's a little complex. In my case, there is a guest in kvm, and
> virtio_net is used with napi_tx enabled.
>
> With napi_tx enabled, skbs are not orphaned when passed to virtio_net;
> they stay charged to the socket until they are released. The point is
> that virtio_net's tx interrupt gets turned off, and the skbs can't be
> released until the next net_rx interrupt comes. So wmem_alloc can't
> decrease in time, and the bandwidth is limited. When this happens, the
> bandwidth can drop from 500M to 10M.
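For background: the check compares the socket's not-yet-freed transmit
memory (sk_wmem_alloc) against a limit derived from the pacing rate.
Roughly, simplified from tcp_small_queue_check() in
net/ipv4/tcp_output.c (details vary by kernel version):

    unsigned long limit;

    /* allow about 1-2 ms worth of data at the current pacing rate,
     * but never less than two full skbs
     */
    limit = max_t(unsigned long, 2 * skb->truesize,
                  sk->sk_pacing_rate >> READ_ONCE(sk->sk_pacing_shift));

    /* sk_wmem_alloc only drops when the skb is freed or orphaned, so
     * if the driver holds on to skbs (as napi_tx does until the tx
     * completion runs), this stays true and TCP stops sending
     */
    if (refcount_read(&sk->sk_wmem_alloc) > limit)
            return true;    /* flow is throttled by TSQ */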
>
> In fact, this napi_tx issue is fixed by this commit:
> https://lore.kernel.org/lkml/20210719144949.935298466@linuxfoundation.org/
>
> I added this statistic to monitor the sending failures (maybe better
> called sending delays) caused by the qdisc and net_device. When this
> happens, users can raise '/proc/sys/net/ipv4/tcp_pacing_ss_ratio' to
> get better bandwidth.
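For context on that knob: the pacing rate is mss * cwnd / srtt, scaled
by tcp_pacing_ss_ratio (in percent, default 200) while in slow start,
or by tcp_pacing_ca_ratio afterwards. Roughly, simplified from
tcp_update_pacing_rate() in net/ipv4/tcp_input.c:

    u64 rate;

    /* base term; srtt_us is stored left-shifted by 3, hence the << 3 */
    rate = (u64)tp->mss_cache * ((USEC_PER_SEC / 100) << 3);

    /* scale by the ss/ca ratio sysctls, expressed in percent */
    if (tp->snd_cwnd < tp->snd_ssthresh / 2)
            rate *= sock_net(sk)->ipv4.sysctl_tcp_pacing_ss_ratio;
    else
            rate *= sock_net(sk)->ipv4.sysctl_tcp_pacing_ca_ratio;

    rate *= max(tp->snd_cwnd, tp->packets_out);
    if (likely(tp->srtt_us))
            do_div(rate, tp->srtt_us);

Since the TSQ limit is sk_pacing_rate >> sk_pacing_shift, raising
net.ipv4.tcp_pacing_ss_ratio raises that limit, letting more bytes sit
unreleased in the driver before TSQ throttles the flow.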
Sounds very second-order and particular to a buggy driver :/
Let's see what Eric says but I vote revert.