Date:	Fri, 18 Mar 2016 18:58:23 +0100
From:	"Bendik Rønning Opstad" <bro.devel@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>,
	Bendik Rønning Opstad <bro.devel@...il.com>
Cc:	"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org,
	Yuchung Cheng <ycheng@...gle.com>,
	Neal Cardwell <ncardwell@...gle.com>,
	Andreas Petlund <apetlund@...ula.no>,
	Carsten Griwodz <griff@...ula.no>,
	Pål Halvorsen <paalh@...ula.no>,
	Jonas Markussen <jonassm@....uio.no>,
	Kristian Evensen <kristian.evensen@...il.com>,
	Kenneth Klette Jonassen <kennetkl@....uio.no>
Subject: Re: [PATCH v6 net-next 2/2] tcp: Add Redundant Data Bundling (RDB)

On 14/03/16 22:15, Eric Dumazet wrote:
> Acked-by: Eric Dumazet <edumazet@...gle.com>
>
> Note that RDB probably should get some SNMP counters,
> so that we get an idea of how many times a loss could be repaired.

Good idea. Simply count how many times an RDB packet successfully
repaired loss? Note that this can be one or more lost packets. When
bundling N packets, the RDB packet can repair up to N losses in the
previous N packets that were sent.

Which list should this be added to? snmp4_tcp_list?

Any other counters that would be useful? Total number of RDB packets
transmitted?
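To make the proposed accounting concrete, here is a hypothetical userspace sketch of counting how many losses a single RDB packet repairs. The function name and the per-segment loss bitmap are made up for illustration; the real patch would hook into the kernel's existing SACK/loss bookkeeping and bump an SNMP counter instead:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch, not the actual patch: given which of the N
 * previously sent (and bundled) segments were lost, count how many
 * of those losses this one RDB packet repairs. */
static unsigned int rdb_count_repaired(const bool *lost_prev,
				       unsigned int n_bundled)
{
	unsigned int i, repaired = 0;

	for (i = 0; i < n_bundled; i++)
		if (lost_prev[i])
			repaired++;	/* each bundled segment covers
					 * at most one lost packet */
	return repaired;
}
```

This also shows why a single "repairs" counter may hide information: one event can stand for anywhere between 1 and N repaired packets, so counting both events and repaired packets could be useful.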

> Ideally, if the path happens to be lossless, all these pro active
> bundles are overhead. Might be useful to make RDB conditional to
> tp->total_retrans or something.

Yes, that is a good point. We have discussed this (for years, really),
but have not had the opportunity to investigate it in depth.
Hard-coding such a condition is not ideal, as whether bundling from
the beginning is desirable depends very much on the use case. In most
cases this is probably a fair compromise, but preferably we would
have some logic/settings to control how the bundling rate is
dynamically adjusted in response to certain events, defined by a set
of given metrics.

A conservative (default) setting would not do any bundling until loss
has been registered, and could also check against some smoothed loss
indicator, such that a certain amount of loss must have occurred
within a specific time frame to allow bundling. This could be useful
in cases where network congestion varies greatly depending on, for
example, the time of day.
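A minimal sketch of such a gate, written as plain userspace C. The EWMA gain, the threshold, and all names are assumptions for illustration, not anything from the patch:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative only: gate bundling on a smoothed loss indicator,
 * updated once per delivery/loss event. */
struct rdb_loss_est {
	unsigned int ewma_loss;		/* loss rate scaled by 1024 */
};

#define RDB_LOSS_SHIFT	3		/* EWMA gain of 1/8 */
#define RDB_LOSS_THRESH	10		/* ~1% loss, scaled by 1024 */

static void rdb_loss_update(struct rdb_loss_est *est, bool lost)
{
	unsigned int sample = lost ? 1024 : 0;

	/* srtt-style update: ewma += (sample - ewma) / 8 */
	est->ewma_loss += (sample >> RDB_LOSS_SHIFT)
			- (est->ewma_loss >> RDB_LOSS_SHIFT);
}

static bool rdb_may_bundle(const struct rdb_loss_est *est)
{
	return est->ewma_loss >= RDB_LOSS_THRESH;
}
```

With this shape, a lossless path never enables bundling (no overhead), while the estimator decays back below the threshold once losses stop.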

In a scenario where minimal application layer latency is very
important, but only sporadic (single) packet loss is expected to
regularly occur, always bundling one previous packet may be both
sufficient and desirable.
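As a toy illustration of what bundling a single previous segment amounts to, here is a userspace sketch. Names and parameters are hypothetical; the real RDB code operates on skbs inside the kernel:

```c
#include <assert.h>
#include <string.h>

/* Toy sketch: prepend the payload of the last (still unacknowledged)
 * segment to the new data, so one packet carries both. */
static size_t rdb_bundle_one(const char *prev, size_t prev_len,
			     const char *new_data, size_t new_len,
			     char *out, size_t out_cap)
{
	if (prev_len + new_len > out_cap)
		return 0;	/* bundle would exceed the MSS: send
				 * the new data unbundled instead */
	memcpy(out, prev, prev_len);		/* redundant bytes first */
	memcpy(out + prev_len, new_data, new_len);
	return prev_len + new_len;
}
```

If the previous packet was lost, the receiver still gets its bytes from this packet, which is why a single bundled segment suffices against sporadic single-packet loss.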

In the end, the best settings for an application/service depend on
the degree to which application layer latency (both its minimum and
its variation) affects the QoE.

There are many possibilities to consider in this regard, and I expect
we will not have this question fully explored any time soon. Most
importantly, we should ensure that such logic can easily be added
later on without breaking backwards compatibility.

Suggestions and comments on this are very welcome.


Bendik
