Date:	Tue, 7 Jun 2016 01:04:06 +0000
From:	Lawrence Brakmo <brakmo@...com>
To:	David Miller <davem@...emloft.net>
CC:	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	Kernel Team <Kernel-team@...com>,
	"ncardwell@...gle.com" <ncardwell@...gle.com>,
	"eric.dumazet@...il.com" <eric.dumazet@...il.com>,
	"ycheng@...gle.com" <ycheng@...gle.com>,
	"stephen@...workplumber.org" <stephen@...workplumber.org>,
	"kennetkl@....uio.no" <kennetkl@....uio.no>
Subject: Re: [PATCH net-next 2/2] tcp: add NV congestion control

On 6/6/16, 4:01 PM, "David Miller" <davem@...emloft.net> wrote:

>From: Lawrence Brakmo <brakmo@...com>
>Date: Fri, 3 Jun 2016 13:37:58 -0700
>
>> +module_param(nv_enable, int, 0644);
>> +MODULE_PARM_DESC(nv_enable, "enable NV (congestion avoidance) behavior");
>> +module_param(nv_pad, int, 0644);
>> +MODULE_PARM_DESC(nv_pad, "extra packets above congestion level");
>> +module_param(nv_pad_buffer, int, 0644);
>> +MODULE_PARM_DESC(nv_pad_buffer, "no growth buffer zone");
>> +module_param(nv_reset_period, int, 0644);
>> +MODULE_PARM_DESC(nv_reset_period, "nv_min_rtt reset period (secs)");
>> +module_param(nv_min_cwnd, int, 0644);
>> +MODULE_PARM_DESC(nv_min_cwnd, "NV will not decrease cwnd below this value"
>> +		 " without losses");
>> +module_param(nv_dec_eval_min_calls, int, 0644);
>> +MODULE_PARM_DESC(nv_dec_eval_min_calls, "Wait for this many data points"
>> +		 " before declaring congestion");
>> +module_param(nv_inc_eval_min_calls, int, 0644);
>> +MODULE_PARM_DESC(nv_inc_eval_min_calls, "Wait for this many data points"
>> +		 " before allowing cwnd growth");
>> +module_param(nv_stop_rtt_cnt, int, 0644);
>> +MODULE_PARM_DESC(nv_stop_rtt_cnt, "Wait for this many RTTs before stopping"
>> +		 " cwnd growth");
>> +module_param(nv_ssthresh_eval_min_calls, int, 0644);
>> +MODULE_PARM_DESC(nv_ssthresh_eval_min_calls, "Wait for this many data points"
>> +		 " before declaring congestion during initial slow-start");
>> +module_param(nv_rtt_min_cnt, int, 0644);
>> +MODULE_PARM_DESC(nv_rtt_min_cnt, "Wait for this many RTTs before declaring"
>> +		 " congestion");
>> +module_param(nv_cong_dec_mult, int, 0644);
>> +MODULE_PARM_DESC(nv_cong_dec_mult, "Congestion decrease factor");
>> +module_param(nv_ssthresh_factor, int, 0644);
>> +MODULE_PARM_DESC(nv_ssthresh_factor, "ssthresh factor");
>> +module_param(nv_rtt_factor, int, 0644);
>> +MODULE_PARM_DESC(nv_rtt_factor, "rtt averaging factor");
>> +module_param(nv_rtt_cnt_dec_delta, int, 0644);
>> +MODULE_PARM_DESC(nv_rtt_cnt_dec_delta, "decrease cwnd for this many RTTs"
>> +		 " every 100 RTTs");
>> +module_param(nv_dec_factor, int, 0644);
>> +MODULE_PARM_DESC(nv_dec_factor, "decrease cwnd every ~192 RTTS by factor/8");
>> +module_param(nv_loss_dec_factor, int, 0644);
>> +MODULE_PARM_DESC(nv_loss_dec_factor, "on loss new cwnd = cwnd * this / 1024");
>> +module_param(nv_cwnd_growth_rate_neg, int, 0644);
>> +MODULE_PARM_DESC(nv_cwnd_growth_rate_neg, "Applies when current cwnd growth"
>> +		 " rate < Reno");
>> +module_param(nv_cwnd_growth_rate_pos, int, 0644);
>> +MODULE_PARM_DESC(nv_cwnd_growth_rate_pos, "Applies when current cwnd growth"
>> +		 " rate >= Reno");
>> +module_param(nv_min_min_rtt, int, 0644);
>> +MODULE_PARM_DESC(nv_min_min_rtt, "lower bound for ca->nv_min_rtt");
>> +module_param(nv_max_min_rtt, int, 0644);
>> +MODULE_PARM_DESC(nv_max_min_rtt, "upper bound for ca->nv_min_rtt");
>
>That's a disturbingly huge number of module parameters.  Even the first
>one "nv_enable" is superfluous, just hook up another congestion control
>algorithm.
>
>Please trim this down to something reasonable.  The more of these you
>have,
>the more of them people will start wanting to use and depend upon, and
>then
>you're stuck with them whether you like it or not.

I will trim them down to something much smaller. My original intent was to
support experimentation, but those interested can use this version of the
patch to play with a larger number of parameters/tunables.

Regarding the parameter "nv_enable": its function is to allow existing NV
flows to quickly revert to Reno behavior (via "echo 0 >
/sys/module/tcp_nv/parameters/nv_enable"). This may be necessary because,
under some conditions, NV flows competing with more aggressive flows will
perform poorly. People may be more willing to test NV if there is a quick
and simple way to revert when necessary (without having to restart the
flows).

As to why NV could perform poorly: without network support there is an
inherent unfairness between flows that only decrease their cwnd when there
are losses (congestion control) and flows that decrease their cwnd when
they detect queue buildup (congestion avoidance). This is especially true
when the RTTs of both types are small (traffic within a DC), but the
unfairness decreases as the RTTs of the congestion-control flows increase.

So, is it okay if I keep the parameter "nv_enable", or would you still
prefer that I remove it?
