Date:   Sun, 05 Nov 2017 23:16:00 +0900 (KST)
From:   David Miller <davem@...emloft.net>
To:     priyarjha@...gle.com
Cc:     netdev@...r.kernel.org, ycheng@...gle.com, ncardwell@...gle.com
Subject: Re: [PATCH net-next] tcp: higher throughput under reordering with
 adaptive RACK reordering wnd

From: Priyaranjan Jha <priyarjha@...gle.com>
Date: Fri,  3 Nov 2017 16:38:48 -0700

> Currently TCP RACK loss detection does not work well if packets are
> being reordered beyond its static reordering window (min_rtt/4). Under
> such reordering it may falsely trigger loss recoveries and reduce TCP
> throughput significantly.
> 
> This patch improves on that by increasing and reducing the reordering
> window based on DSACK, which is now supported in major TCP implementations.
> It makes RACK's reo_wnd adaptive, based on DSACK and the number of recoveries.
> 
> - If DSACK is received, increment reo_wnd by min_rtt/4 (upper bounded
>   by srtt), since there is possibility that spurious retransmission was
>   due to reordering delay longer than reo_wnd.
> 
> - Persist the current reo_wnd value for TCP_RACK_RECOVERY_THRESH (16)
>   successful recoveries (this accounts for a full DSACK-based loss
>   recovery undo). After that, reset it to the default (min_rtt/4).
> 
> - reo_wnd is incremented at most once per RTT, so that the new DSACK
>   we are reacting to is (approximately) due to a spurious retransmission
>   sent after reo_wnd was last updated.
> 
> - reo_wnd is tracked in terms of steps (of min_rtt/4), rather than as
>   an absolute value, to account for changes in RTT.
> 
> In our internal testing, we observed significant increase in throughput,
> in scenarios where reordering exceeds min_rtt/4 (previous static value).
> 
> Signed-off-by: Priyaranjan Jha <priyarjha@...gle.com>
> Signed-off-by: Yuchung Cheng <ycheng@...gle.com>
> Signed-off-by: Neal Cardwell <ncardwell@...gle.com>

Applied, thanks.
