Message-ID: <20170914104905.0145e489@plumbers-lap.home.lan>
Date: Thu, 14 Sep 2017 10:49:05 -0700
From: Stephen Hemminger <stephen@...workplumber.org>
To: David Miller <davem@...emloft.net>
Cc: kys@...rosoft.com, haiyangz@...rosoft.com, netdev@...r.kernel.org,
devel@...uxdriverproject.org, sthemmin@...rosoft.com
Subject: Re: [PATCH net] netvsc: increase default receive buffer size
On Thu, 14 Sep 2017 10:02:03 -0700 (PDT)
David Miller <davem@...emloft.net> wrote:
> From: Stephen Hemminger <stephen@...workplumber.org>
> Date: Thu, 14 Sep 2017 09:31:07 -0700
>
> > The default receive buffer size was reduced by recent change
> > to a value which was appropriate for 10G and Windows Server 2016.
> > But the value is too small for full performance with 40G on Azure.
> > Increase the default back to maximum supported by host.
> >
> > Fixes: 8b5327975ae1 ("netvsc: allow controlling send/recv buffer size")
> > Signed-off-by: Stephen Hemminger <sthemmin@...rosoft.com>
>
> What other side effects are there to making this buffer so large?
>
> Just curious...
It increases latency and exercises bufferbloat avoidance on TCP.
The problem was the smaller buffer caused regressions in UDP
benchmarks on 40G Azure. One could argue that this is not a reasonable
benchmark, but people run it. Apparently, Windows already did the
same thing and uses an even bigger buffer.
Longer term there will be more internal discussion with different
teams about what the receive latency and buffering needs to be.
Also, the issue goes away once accelerated networking (SR-IOV)
is more widely used.
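
For illustration, here is a minimal userspace sketch of the kind of
change involved: defaulting the receive buffer to the host maximum
while still clamping any explicitly requested size. The macro names
and values below are assumptions for the example, not the actual
hv_netvsc ones.

#include <stdio.h>

/*
 * Hypothetical sketch: the default receive buffer size tracks the
 * assumed host-supported maximum, and requested sizes are clamped
 * into the valid range.  Names and sizes are illustrative only.
 */
#define RECV_BUF_MIN		(64 * 1024)
#define RECV_BUF_MAX		(16 * 1024 * 1024)	/* assumed host maximum */
#define RECV_BUF_DEFAULT	RECV_BUF_MAX		/* was a smaller 10G-tuned value */

static unsigned int clamp_recv_buf(unsigned int requested)
{
	if (requested < RECV_BUF_MIN)
		return RECV_BUF_MIN;
	if (requested > RECV_BUF_MAX)
		return RECV_BUF_MAX;
	return requested;
}

int main(void)
{
	/* No explicit request: fall back to the maximum-sized default. */
	unsigned int buf = clamp_recv_buf(RECV_BUF_DEFAULT);

	printf("receive buffer: %u bytes\n", buf);
	return 0;
}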