Message-ID: <CAK6E8=dx0Yt7y2g6JBi4JWM5Z8ARQPSsAYioEOPS5DiGkTU1yw@mail.gmail.com>
Date:	Wed, 29 Jun 2016 00:06:16 -0700
From:	Yuchung Cheng <ycheng@...gle.com>
To:	Daniel Metz <dmetz@...um.de>, Hagen Paul Pfeifer <hagen@...u.net>,
	Daniel Metz <Daniel.Metz@...de-schwarz.com>
Cc:	netdev <netdev@...r.kernel.org>,
	Eric Dumazet <edumazet@...gle.com>,
	Neal Cardwell <ncardwell@...gle.com>,
	David Miller <davem@...emloft.net>
Subject: Re: [PATCH net-next v3] tcp: use RFC6298 compliant TCP RTO calculation

On Tue, Jun 21, 2016 at 10:53 PM, Yuchung Cheng <ycheng@...gle.com> wrote:
>
> On Fri, Jun 17, 2016 at 11:56 AM, Yuchung Cheng <ycheng@...gle.com> wrote:
> >
> > On Fri, Jun 17, 2016 at 11:32 AM, David Miller <davem@...emloft.net> wrote:
> > >
> > > From: Daniel Metz <dmetz@...um.de>
> > > Date: Wed, 15 Jun 2016 20:00:03 +0200
> > >
> > > > This patch adjusts the Linux RTO calculation to be RFC6298
> > > > compliant. MinRTO is no longer added to the computed RTO; RTO
> > > > damping and overestimation are decreased.
> > >  ...
> > >
> > > Yuchung, I assume I am waiting for you to do the testing you said
> > > you would do for this patch, right?
> > Yes I spent the last two days resolving some unrelated glitches to
> > start my testing on Web servers. I should be able to get some results
> > over the weekend.
> >
> > I will test
> > 0) current Linux
> > 1) this patch
> > 2) RFC6298 with min_RTO=1sec
> > 3) RFC6298 with a minimum RTTVAR of 200ms (so it is more like the
> > current Linux style of min RTO, where the bound applies only to the
> > RTTVAR term)
> >
> > and collect the TCP latency (how long to send an HTTP response) and
> > (spurious) timeout & retransmission stats.
> >
> Thanks for the patience. I've collected data from some Google Web
> servers. They serve a mix of US and South American users over HTTP/1
> and HTTP/2. The traffic is Web browsing (e.g., search, maps, Gmail,
> etc., but not YouTube videos). The mean RTT is about 100ms.
>
> The user connections were split into 4 groups with different TCP RTO
> configs. Each group has many millions of connections, and the size
> variation among groups is well under 1%.
>
> B: baseline Linux
> D: this patch
> R: change RTTVAR averaging as in D, but bound RTO to 1sec per RFC6298
> Y: change RTTVAR averaging as in D, but bound RTTVAR to 200ms instead (like B)
>
> For mean TCP latency of HTTP responses (first byte sent to last byte
> acked), B < R < Y < D, but the differences are insignificant (<1%).
> The median, 95th, and 99th percentiles show similarly negligible
> differences. In summary, there is hardly any visible impact on
> latency. I also looked only at responses smaller than 4KB but did
> not see a different picture.
>
> The main difference is the retransmission rate, where R =~ Y < B =~ D.
> R and Y are ~20% lower than B and D. Parsing the SNMP stats reveals
> more interesting details. The table shows the deltas in percentage
> relative to the baseline B.
>
>                 D      R     Y
> ------------------------------
> Timeout      +12%   -16%  -16%
> TailLossProb +28%    -7%   -7%
> DSACK_rcvd   +37%    -7%   -7%
> Cwnd-undo    +16%   -29%  -29%
>
> The RTO change affects TLP because TLP uses the min of the RTO and
> the TLP timer value to arm the probe timer.
>
> The stats indicate that the main culprit of spurious timeouts / rtx
> is the RTO lower bound. But they also show that the RFC RTTVAR
> averaging is as good as the current Linux approach.
>
> Given that, I would recommend we revise this patch to use the RFC
> averaging but keep the existing lower bound (of RTTVAR at 200ms). We
> can experiment further with the lower bound and change it in a
> separate patch.
Hi, I have an update.

I instrumented the kernel to capture the time spent in recovery
(patch attached). The latency measurement starts when TCP goes into
recovery, triggered by either ACKs or RTOs. The start time is the
(original) sent time of the first unacked packet. The end time is
when an ACK covers the highest sequence that had been sent when
recovery started. The total latency in usec and the count are
recorded in MIB_TCPRECOVLAT and MIB_TCPRECOVCNT. If the connection
times out or closes while the sender is still in recovery, the total
latency and count are stored in MIB_TCPRECOVLAT2 and MIB_TCPRECOVCNT2
instead. This second bucket captures long recoveries that lead to
eventual connection aborts.
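
In case the above is easier to follow in code, here is a rough
user-space model of what the instrumentation counts. This is NOT the
attached patch: the real code hooks the kernel's recovery paths, the
recov_* names below are illustrative stand-ins for the new MIBs, and
sequence wraparound (the kernel's before()/after() helpers) is ignored
for brevity.

#include <stdint.h>
#include <stdio.h>

static uint64_t recov_lat_us, recov_cnt;    /* recovery ended normally     */
static uint64_t recov_lat2_us, recov_cnt2;  /* conn aborted while in it    */

struct recov_state {
        int      in_recovery;
        uint64_t start_us;  /* (original) send time of first unacked pkt */
        uint32_t end_seq;   /* highest sequence sent when recovery began */
};

/* Entering recovery, triggered by ACKs (dupack/SACK) or an RTO. */
static void recovery_start(struct recov_state *rs,
                           uint64_t first_unacked_sent_us,
                           uint32_t highest_sent_seq)
{
        rs->in_recovery = 1;
        rs->start_us = first_unacked_sent_us;
        rs->end_seq = highest_sent_seq;
}

/* An ACK arrives; recovery ends once the ACK covers end_seq. */
static void recovery_ack(struct recov_state *rs, uint32_t ack_seq,
                         uint64_t now_us)
{
        if (rs->in_recovery && ack_seq >= rs->end_seq) {
                recov_lat_us += now_us - rs->start_us;
                recov_cnt++;
                rs->in_recovery = 0;
        }
}

/* Connection timed out or closed while still in recovery: 2nd bucket. */
static void recovery_abort(struct recov_state *rs, uint64_t now_us)
{
        if (rs->in_recovery) {
                recov_lat2_us += now_us - rs->start_us;
                recov_cnt2++;
                rs->in_recovery = 0;
        }
}

int main(void)
{
        struct recov_state rs = { 0 };

        /* One episode: first unacked packet sent at t=0, recovery must
         * cover seq 3000, and the covering ACK arrives 250000us later. */
        recovery_start(&rs, 0, 3000);
        recovery_ack(&rs, 3000, 250000);
        recovery_abort(&rs, 300000);    /* no-op: recovery already ended */

        printf("bucket1: %llu us / %llu episodes, bucket2: %llu us / %llu\n",
               (unsigned long long)recov_lat_us,
               (unsigned long long)recov_cnt,
               (unsigned long long)recov_lat2_us,
               (unsigned long long)recov_cnt2);
        return 0;
}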

Since network stats usually follow a power-law distribution, the mean
is dominated by the tail, but the new metrics still show a very
interesting impact from the different RTO settings. Using the same
format as in my previous email, the table below shows B's absolute
mean and, for D/R/Y, the difference in percentage relative to that
baseline.

                       B      D      R      Y
---------------------------------------------
mean TCPRecovLat      3s    -7%   +39%   +38%
mean TCPRecovLat2    52s    +1%   -11%   -11%

The new metrics show that lower-bounding the RTO at 200ms (D) indeed
lowers the recovery latency. But per my previous analysis, D also has
a lot more spurious rtx and TLPs (whose collateral damage on latency
is not captured by these metrics). And note that the TLP timer uses
the min of the RTO and the TLP timeout, so TLP fires 28% more often in
D. Therefore the latency benefit may come mainly from a faster TLP
timer. Nevertheless, the significant impact on recovery latency does
not show up in the response latency we measured earlier. My conjecture
is that only a small fraction of flows experience losses, so even a
~40% change in average loss-recovery latency does not move the needle,
or the latency reduction was cancelled out by the latency increase
from spurious timeouts and CC reactions.
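
To illustrate that interaction, a simplified sketch (this is not the
kernel's tcp_schedule_loss_probe() logic, and the 250ms TLP timeout
below is made up):

/* The probe timer is armed with min(TLP timeout, RTO), so a smaller
 * RTO -- as in D -- also pulls the tail-loss probe in earlier and
 * fires it more often. */
#include <stdio.h>

static double min_d(double a, double b) { return a < b ? a : b; }

int main(void)
{
        double tlp_timeout = 0.250;  /* illustrative, ~2*SRTT + a delta */

        printf("probe fires at %.3fs with a 1s RTO, %.3fs with a 0.2s RTO\n",
               min_d(tlp_timeout, 1.0), min_d(tlp_timeout, 0.2));
        return 0;
}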

Hmm, it almost seems that the current code is the right balance. We
may need more investigation.
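
For reference, here is where the lower bound sits in each of the
configs from my earlier mail, as a rough user-space sketch. This is
not the kernel's fixed-point code: B's windowed mdev_max averaging is
not modeled (its bound sits on the variance term, like Y), the 200ms
and 1s constants are the nominal values, and the sample numbers in
main() are made up.

#include <stdio.h>

#define MIN_BOUND_LINUX 0.200   /* 200 ms */
#define MIN_RTO_RFC     1.000   /* 1 s    */

struct rtt_est { double srtt, rttvar; };

static double max_d(double a, double b) { return a > b ? a : b; }

/* RFC 6298 smoothing: alpha = 1/8, beta = 1/4 (uses the old srtt). */
static void rfc6298_update(struct rtt_est *e, double r)
{
        double err = r - e->srtt;

        e->rttvar += 0.25 * ((err < 0 ? -err : err) - e->rttvar);
        e->srtt   += 0.125 * err;
}

/* D (this patch, per my reading): the bound is on the whole RTO. */
static double rto_D(const struct rtt_est *e)
{
        return max_d(MIN_BOUND_LINUX, e->srtt + 4 * e->rttvar);
}

/* R: same, but with the RFC 6298 floor of 1 second. */
static double rto_R(const struct rtt_est *e)
{
        return max_d(MIN_RTO_RFC, e->srtt + 4 * e->rttvar);
}

/* Y (and, roughly, B): the bound is on the variance term only. */
static double rto_Y(const struct rtt_est *e)
{
        return e->srtt + max_d(MIN_BOUND_LINUX, 4 * e->rttvar);
}

int main(void)
{
        struct rtt_est e = { .srtt = 0.100, .rttvar = 0.005 };

        rfc6298_update(&e, 0.102);     /* a steady ~100ms, low-jitter path */
        printf("D=%.3fs R=%.3fs Y=%.3fs\n", rto_D(&e), rto_R(&e), rto_Y(&e));
        return 0;
}

On such a low-variance ~100ms path this prints D=0.200s R=1.000s
Y=0.300s, which matches the observation that D arms the most
aggressive timer and hence retransmits spuriously more often.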

Daniel and Hagen, could you try my instrumentation patch on your testbed?

Download attachment "0001-tcp-instrument-recovery-latency.patch" of type "application/octet-stream" (6650 bytes)
