Message-Id: <58396856-6D7E-4CE1-8D66-D1F11205B0D5@simula.no>
Date: Thu, 29 Oct 2009 14:51:11 +0100
From: Andreas Petlund <apetlund@...ula.no>
To: Ilpo Järvinen <ilpo.jarvinen@...sinki.fi>
Cc: Arnd Hannemann <hannemann@...s.rwth-aachen.de>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Netdev <netdev@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	shemminger@...tta.com,
	David Miller <davem@...emloft.net>
Subject: Re: [PATCH 2/3] net: TCP thin linear timeouts

On 28 Oct 2009, at 15:31, Ilpo Järvinen wrote:
> On Wed, 28 Oct 2009, Arnd Hannemann wrote:
>
>> Eric Dumazet schrieb:
>>> Andreas Petlund a écrit :
>>>> This patch will make TCP use only linear timeouts if the stream is
>>>> thin. This will help to avoid the very high latencies that thin
>>>> streams suffer because of exponential backoff. This mechanism is only
>>>> active if enabled by iocontrol or syscontrol and the stream is
>>>> identified as thin.
>
> ...I don't see how high latency is in any connection to stream being
> "thin" or not btw. If all ACKs are lost it usually requires silence
> for the full RTT, which affects a stream regardless of its size.
> ...If not all ACKs are lost, then the dupACK approach in the other
> patch should cover it already.

The increased latency that we observed does not arise from lost ACKs,
but from the lack of enough packets in flight to trigger fast
retransmits. This effectively limits the retransmission options to
retransmission by timeout, which will increase exponentially with each
subsequent retransmission. We have also found that the "thin" stream
patterns are very often generated by applications built around human
interaction. Such applications will give the user a degraded
experience if these high latencies happen often. An in-depth
discussion of these effects can be found in the papers I linked to.
If the application produces less than one packet per RTT, the dupACK
modification will be ineffective and any improved latency will come
from the linear timeouts. If the number of packets in flight is 2-4,
fast retransmission may never be triggered under the 3-dupACK scheme,
but retransmitting on the first indication of loss will improve
retransmission latency.

>> However, addressing the proposal:
>> I wonder how one can seriously suggest to just skip congestion
>> response during timeout-based loss recovery? I believe that in
>> heavily congested scenarios, this would lead to a goodput
>> disaster... Not to mention that in a heavily congested scenario,
>> suddenly every flow will become "thin", so this will even amplify
>> the problems. Or did I miss something?
>
> Good point. I suppose such an under-provisioned network can certainly
> be there. I have heard that at least some people who remove
> exponential backoff apply it later on nth retransmission as very
> often there really isn't such a super heavy congestion scenario but
> something completely unrelated to congestion which causes the RTO.
>
> --
> i.

The removal of exponential backoff on a general basis has been
investigated and discussed already, for instance here:
http://ccr.sigcomm.org/online/?q=node/416

Such a step is, however, considered drastic, and I agree that care
must be taken to thoroughly investigate the effects of such changes.
The changes introduced by the proposed patches, however, are not
default behaviour, but an option for applications that suffer from the
increased retransmission latencies of thin-stream TCP. As such, they
will not affect all streams. In addition, the changes will only be
active for streams which are perpetually thin or in the early phase of
expanding their cwnd.
Also, experiments performed on congested bottlenecks with tail-drop
queues show very little (if any) effect on goodput for the modified
scenario compared to a scenario with unmodified TCP streams. Graphs of
both the latency results and the fairness tests can be found here:
http://folk.uio.no/apetlund/lktmp/

-AP