Message-ID: <CADVnQym47_uqqKWkGnu7hA+vhHjvURMmTdd0Xx6z8m_mspwFJw@mail.gmail.com>
Date: Wed, 10 Aug 2022 08:43:44 -0400
From: Neal Cardwell <ncardwell@...gle.com>
To: Yonglong Li <liyonglong@...natelecom.cn>
Cc: netdev@...r.kernel.org, davem@...emloft.net, edumazet@...gle.com,
	ycheng@...gle.com, dsahern@...nel.org, kuba@...nel.org,
	pabeni@...hat.com
Subject: Re: [PATCH v2] tcp: adjust rcvbuff according copied rate of user space

On Wed, Aug 10, 2022 at 3:49 AM Yonglong Li <liyonglong@...natelecom.cn> wrote:
>
> tcp_rcv_space_adjust() is called every time data is copied to user
> space. Currently it adjusts the receive buffer by the length of the
> data copied to user space. If the interval at which user space copies
> data from the socket is not stable, the length of the copied data does
> not accurately reflect the speed at which data is drained from the
> receive buffer.
> So in tcp_rcv_space_adjust() it is more reasonable to adjust the
> receive buffer by the copy rate (length of copied data / interval)
> instead of the copied data length.
>
> I tested this patch in a simulated environment using Mininet:
> with an 80~120ms RTT / 1% loss link, 100 runs
> of (netperf -t TCP_STREAM -l 5) got an average throughput
> of 17715 Kbit instead of 17703 Kbit.
> with an 80~120ms RTT link without loss, 100 runs of (netperf -t
> TCP_STREAM -l 5) got an average throughput of 18272 Kbit
> instead of 18248 Kbit.

So with 1% emulated loss that's a 0.06% throughput improvement, and
without emulated loss that's a 0.13% improvement. That sounds like it
may well be statistical noise, particularly given that we would expect
the steady-state impact of this change to be negligible.

IMHO these results do not justify the added complexity and state.

best regards,
neal
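[Editor's note: for readers skimming the thread, below is a minimal
userspace sketch of the rate-based idea the patch description proposes.
It is not the kernel patch itself: the helper names copied_rate_bps()
and rcvbuf_bytes_for_rtt() are hypothetical, chosen only to illustrate
the arithmetic. Mainline tcp_rcv_space_adjust() in net/ipv4/tcp_input.c
instead compares bytes copied over roughly one RTT against its previous
estimate.]

/*
 * Sketch only, not the actual kernel code. The proposal's core idea:
 * normalize the bytes copied to user space by the elapsed interval, so
 * an application that reads late or early does not distort the
 * estimate of how fast data drains from the receive buffer.
 */
#include <stdint.h>
#include <stdio.h>

/* Bytes per second implied by copying copied_bytes over elapsed_us
 * microseconds of wall-clock time. */
static uint64_t copied_rate_bps(uint64_t copied_bytes, uint64_t elapsed_us)
{
	if (elapsed_us == 0)
		return 0;
	return copied_bytes * 1000000ULL / elapsed_us;
}

/* Buffer space needed to absorb one RTT's worth of data at that rate,
 * roughly what a receive-buffer autotuner must provision. */
static uint64_t rcvbuf_bytes_for_rtt(uint64_t rate_bps, uint64_t rtt_us)
{
	return rate_bps * rtt_us / 1000000ULL;
}

int main(void)
{
	/* Example: 125000 bytes copied over 50 ms => 2500000 B/s; with
	 * a 100 ms RTT the buffer should hold about 250000 bytes. */
	uint64_t rate = copied_rate_bps(125000, 50000);

	printf("rate = %llu B/s, target = %llu bytes\n",
	       (unsigned long long)rate,
	       (unsigned long long)rcvbuf_bytes_for_rtt(rate, 100000));
	return 0;
}

[Since the existing code already samples over approximately one RTT,
dividing by the actual elapsed interval mostly matters when reads are
irregular, which is consistent with Neal's observation that the
steady-state impact should be negligible.]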