Date:   Sat, 2 Sep 2017 22:33:20 +0200
From:   Natale Patriciello <natale.patriciello@...il.com>
To:     Eric Dumazet <eric.dumazet@...il.com>
Cc:     netdev <netdev@...r.kernel.org>,
        Ahmed Said <ahmed.said@...roma2.it>,
        Francesco Zampognaro <zampognaro@....uniroma2.it>,
        Cesare Roseti <roseti@....uniroma2.it>
Subject: Re: [RFC PATCH v1 0/5] TCP Wave

Hello all,
first of all, we would like to thank everyone who commented on our
patches; we are working to incorporate all the suggestions (and we are
at a good point).

However, we have two questions:

1) How should we retrieve per-connection information? Right now we use
debug messages, but we understand that is not the best option. TCP Wave
users have other values to track besides the congestion window and the
slow-start threshold. It seems we have two alternatives: (a) use the
get_info callback, which exposes data that can be read with ss, or
(b) open a file under /proc/net and write data to it, in the same way
tcp_probe does. Option (a) requires polling from userspace (for
instance with watch), which is subject to delays and may not be
suitable for fast connections (watch's minimum interval is 100 ms).
Option (b), toggled with a module parameter, seems the more viable one.
Is that correct?
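As a concrete example of option (a), we are thinking of something along
these lines, loosely modeled on tcp_vegas_get_info() and reusing the
INET_DIAG_VEGASINFO slot the way tcp_westwood already does; struct
wavetcp and its fields are only placeholders, not part of the posted
patches:

#include <net/tcp.h>
#include <linux/inet_diag.h>

/* Placeholder per-socket state; field names are illustrative only. */
struct wavetcp {
	u32 burst_size;
	u32 timer_us;
	u32 min_rtt_us;
};

/* Sketch of a .get_info callback: export TCP Wave state through the
 * INET_DIAG_VEGASINFO slot (as tcp_westwood does), so that "ss -ti"
 * can display it without defining a new netlink attribute.
 */
static size_t wavetcp_get_info(struct sock *sk, u32 ext, int *attr,
			       union tcp_cc_info *info)
{
	const struct wavetcp *ca = inet_csk_ca(sk);

	if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) {
		info->vegas.tcpv_enabled = 1;
		info->vegas.tcpv_rttcnt  = ca->burst_size;
		info->vegas.tcpv_rtt     = ca->timer_us;
		info->vegas.tcpv_minrtt  = ca->min_rtt_us;
		*attr = INET_DIAG_VEGASINFO;
		return sizeof(info->vegas);
	}
	return 0;
}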

The second question is inline:

On 28/07/17 at 10:33pm, Eric Dumazet wrote:
> On Fri, 2017-07-28 at 21:59 +0200, Natale Patriciello wrote:
> > Hi,
[cut]
> > TCP Wave (TCPW) replaces the window-based transmission paradigm of the standard
> > TCP with a burst-based transmission, the ACK-clock scheduling with a
> > self-managed timer and the RTT-based congestion control loop with an Ack-based
> > Capacity and Congestion Estimation (ACCE) module. In non-technical words, it
> > sends data down the stack when its internal timer expires, and the timing of
> > the received ACKs contribute to updating this timer regularly.
>
> This patch series seems to have missed recent efforts in TCP stack,
> namely TCP pacing.
>
> commit 218af599fa635b107cfe10acf3249c4dfe5e4123 ("tcp: internal
> implementation for pacing") added a timer already to get fine grained
> packet xmits.

Thank you, Eric, for this suggestion; in fact, we had problems with our
own timer implementation, and we would like to switch entirely to the
new pacing timer. However, a pacing approach is exactly the opposite of
what we would like to achieve: we want to send a burst of data (say,
ten segments) and then wait for some amount of time. Do you think that
adding a new congestion control callback that returns the number of
segments to send when the timer expires (defaulting to 1), plus another
callback for retrieving the pacing time, would be a sound strategy?
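
To make the proposal concrete, this is roughly what we have in mind;
neither hook exists in today's struct tcp_congestion_ops, and the names
and values below are only placeholders for illustration:

#include <net/tcp.h>

/* How many segments to release when the pacing timer expires.
 * Returning 1 would keep the current one-packet-per-expiration
 * behaviour of internal pacing.
 */
static u32 wavetcp_pacing_burst(struct sock *sk)
{
	return 10;	/* e.g. a fixed ten-segment burst */
}

/* How long to wait, in nanoseconds, before the next burst; the stack
 * would re-arm the pacing timer with this value instead of the
 * rate-derived per-packet gap.
 */
static u64 wavetcp_pacing_interval(struct sock *sk)
{
	return 2 * NSEC_PER_MSEC;	/* e.g. a 2 ms inter-burst gap */
}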

Thank you again, have a nice day.

Natale
