Message-ID: <4D79FE35.50601@ncsu.edu>
Date:	Fri, 11 Mar 2011 05:49:25 -0500
From:	Injong Rhee <injongrhee@...il.com>
To:	Lucas Nussbaum <lucas.nussbaum@...ia.fr>
CC:	Stephen Hemminger <shemminger@...tta.com>, davem@...emloft.net,
	sangtae.ha@...il.com, netdev@...r.kernel.org
Subject: Re: [PATCH 0/6] TCP CUBIC and Hystart

I think the problem is still the clock resolution (i.e., the use of
HZ=250). I will look into the issue some more.

On 3/11/11 5:28 AM, Lucas Nussbaum wrote:
> On 10/03/11 at 08:51 -0800, Stephen Hemminger wrote:
>> This patch set is my attempt at addressing the problems discovered
>> by Lucas Nussbaum.
> With those patches applied (and the fix I mentioned separately), it
> works much better (still with HZ=250).
>
> When a delayed ack train is detected, slow start ends with cwnd ~= 580
> (sometimes a bit lower).
> When no delayed ack train is detected, slow start ends with the detection of the
> delay increase at cwnd in the [700:1100] range.
>
> Performance is still not as good as without hystart, but it is more
> acceptable:
>
> nuttcp -i1 -n1g graphene-34.nancy.grid5000.fr
>     94.8125 MB /   1.00 sec =  795.3059 Mbps     0 retrans
>    112.2500 MB /   1.00 sec =  941.6325 Mbps     0 retrans
>    112.2500 MB /   1.00 sec =  941.6222 Mbps     0 retrans
>    112.2500 MB /   1.00 sec =  941.6335 Mbps     0 retrans
>    112.2500 MB /   1.00 sec =  941.6354 Mbps     0 retrans
>    112.2500 MB /   1.00 sec =  941.6231 Mbps     0 retrans
>    112.2500 MB /   1.00 sec =  941.5883 Mbps     0 retrans
>    112.2500 MB /   1.00 sec =  941.6297 Mbps     0 retrans
>    112.2500 MB /   1.00 sec =  941.6391 Mbps     0 retrans
>
>   1024.0000 MB /   9.29 sec =  924.7155 Mbps 14 %TX 28 %RX 0 retrans 11.39 msRTT
> During that run, no ack train was detected, but delay increase was detected when cwnd=1105:
> hystart_update: cwnd=1105 ssthresh=1105 fnd=2 hs_det=3   cur_rtt=122 delay_min=90 DELTRE=16
>
> However:
> echo 1 > /proc/sys/net/ipv4/route/flush; nuttcp -i1 -n1g graphene-34.nancy.grid5000.fr
>     49.5000 MB /   1.00 sec =  415.2278 Mbps     0 retrans
>     59.0000 MB /   1.00 sec =  494.9318 Mbps     0 retrans
>     62.1875 MB /   1.00 sec =  521.6535 Mbps     0 retrans
>     64.1250 MB /   1.00 sec =  537.9329 Mbps     0 retrans
>     67.0625 MB /   1.00 sec =  562.5486 Mbps     0 retrans
>     69.4375 MB /   1.00 sec =  582.4840 Mbps     0 retrans
>     72.3750 MB /   1.00 sec =  607.1395 Mbps     0 retrans
>     75.3125 MB /   1.00 sec =  631.7557 Mbps     0 retrans
>     83.1250 MB /   1.00 sec =  697.2975 Mbps     0 retrans
>     94.3125 MB /   1.00 sec =  791.1569 Mbps     0 retrans
>    107.6250 MB /   1.00 sec =  902.8194 Mbps     0 retrans
>    112.2500 MB /   1.00 sec =  941.6231 Mbps     0 retrans
>
>   1024.0000 MB /  12.97 sec =  662.2669 Mbps 10 %TX 20 %RX 0 retrans 11.39 msRTT
> [ 3050.712333] found ACK TRAIN: cwnd=493 now=2757023598 ca->last_ack=2757023598 ca->round_start=2757023593 ca->delay_min=90 delay_min>>4=5
> [ 3050.726045] hystart_update: cwnd=493 ssthresh=493 fnd=1 hs_det=3   cur_rtt=91 delay_min=90 DELTRE=16
> (delayed ack train detected when cwnd=493 => slower convergence)
>
> It seems that the ack train length detection is still a bit too sensitive.
> Changing:
> 	if ((s32)(now - ca->round_start) >= ca->delay_min >> 4)
> To:
> 	if ((s32)(now - ca->round_start) > ca->delay_min >> 4)
> makes things slightly better, but slow start still exits too early. (optimal cwnd=941).
>
> I'm not sure we can really do anything more about that. Detection by
> ack train length is inherently more likely to trigger false positives, since
> all acks are considered, not just a few acks at the beginning of the train.
> I'm tempted to suggest disabling ack train length detection by default, but
> it probably solves problems for other people, and the decrease in
> performance is more acceptable now.
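For anyone who wants to experiment with this without rebuilding: assuming the
stock tcp_cubic module parameter names, the detection heuristics are exposed
as a writable bitmask, so the ack-train test alone can be turned off at
runtime:

```shell
# hystart_detect is a bitmask: 1 = ACK-train heuristic, 2 = delay heuristic.
# Keep only the delay-based slow-start exit (requires root):
echo 2 > /sys/module/tcp_cubic/parameters/hystart_detect
cat /sys/module/tcp_cubic/parameters/hystart_detect
```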

