Message-ID: <CADVnQymDr2K7z3yfKpW-H3R3W3NP+iuPQF2eMfeyS6dn-szdgA@mail.gmail.com>
Date:	Wed, 23 Oct 2013 22:37:00 -0400
From:	Neal Cardwell <ncardwell@...gle.com>
To:	Stephen Hemminger <stephen@...workplumber.org>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	David Miller <davem@...emloft.net>,
	Dave Täht <dave.taht@...ferbloat.net>,
	Netdev <netdev@...r.kernel.org>
Subject: Re: 16% regression on 10G caused by TCP small queues

On Wed, Oct 23, 2013 at 10:29 PM, Stephen Hemminger
<stephen@...workplumber.org> wrote:
> In the course of testing routing functionality, I discovered that the single-flow TCP
> throughput was much worse than expected. At first, it looked like a router problem,
> or maybe because one end was a FreeBSD system (which has noticeably slower TCP performance).
> But reducing it to two systems directly connected over 10G (ixgbe) isolated the problem.
...
>   4. Do something smarter like a dynamic TCP small queue that adapts.

Yep, Eric made TSQ dynamic a few weeks ago, and mentioned that his
commit helps a single flow on a 10Gbps link:

http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=c9eeec26e32e087359160406f96e0949b3cc6f10

Can you please check the performance in your setup on 3.12-rc4 or newer? :-)

Thanks!

neal

---

commit c9eeec26e32e087359160406f96e0949b3cc6f10
Author: Eric Dumazet <edumazet@...gle.com>
Date:   Fri Sep 27 03:28:54 2013 -0700

    tcp: TSQ can use a dynamic limit

    When TCP Small Queues was added, we used a sysctl to limit the amount
    of packets queued on Qdisc/device queues for a given TCP flow.

    The problem is that this limit is either too big for low rates, or too
    small for high rates.

    Now that the TCP stack has rate estimation in sk->sk_pacing_rate, and
    TSO auto sizing, it can better control the number of packets in
    Qdisc/device queues.

    The new limit is two packets, or at least 1 to 2 ms worth of packets.

    Low-rate flows benefit from this patch by having an even smaller
    number of packets in queues, allowing for faster recovery and
    better RTT estimation.

    High-rate flows benefit from this patch by being allowed more than 2
    packets in flight, as we had reports this was a limiting factor in
    reaching line rate. [ In particular if TX completion is delayed
    because of coalescing parameters ]

    Example for a single flow on a 10Gbps link controlled by FQ/pacing

    14 packets in flight instead of 2
    ...
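
For context, here is a rough back-of-the-envelope illustration of how that
dynamic limit scales with the pacing rate. This is a standalone sketch, not
the kernel code itself: the exact max() expression, the skb truesize figures,
and the old 128 KB tcp_limit_output_bytes default used below are assumptions
for illustration only.

/*
 * Standalone sketch (NOT the kernel source) of the dynamic TSQ limit
 * described in the commit message above.  The exact expression, the
 * truesize figures and the old 128 KB sysctl default are assumptions.
 */
#include <stdio.h>
#include <stdint.h>

#define OLD_FIXED_LIMIT 131072ULL /* old tcp_limit_output_bytes default */

static uint64_t max_u64(uint64_t a, uint64_t b)
{
	return a > b ? a : b;
}

/*
 * "Two packets or at least 1 to 2 ms worth of packets":
 * pacing_rate >> 10 is roughly rate / 1024 bytes, i.e. about 1 ms worth
 * of data, and two skbs worth acts as the floor.  TSO autosizing makes
 * skb_truesize itself rate-dependent (near one MSS at low rates, ~64 KB
 * at high rates).
 */
static uint64_t tsq_limit(uint64_t pacing_rate, uint64_t skb_truesize)
{
	return max_u64(2 * skb_truesize, pacing_rate >> 10);
}

int main(void)
{
	/* ~1 Mbit/s flow: TSO autosizing keeps skbs near one MSS */
	uint64_t slow = tsq_limit(125000ULL, 2304ULL);
	/* ~10 Gbit/s flow: full-sized ~64 KB TSO skbs */
	uint64_t fast = tsq_limit(1250000000ULL, 66816ULL);

	printf("  1 Mbit/s: dynamic limit %7llu B vs old fixed %llu B\n",
	       (unsigned long long)slow, (unsigned long long)OLD_FIXED_LIMIT);
	printf(" 10 Gbit/s: dynamic limit %7llu B vs old fixed %llu B\n",
	       (unsigned long long)fast, (unsigned long long)OLD_FIXED_LIMIT);
	return 0;
}

With these assumed numbers, the ~1 Mbit/s flow gets a limit of a couple of
small skbs (~4.5 KB) instead of the old fixed 128 KB, while the 10 Gbit/s
flow is allowed roughly 1.2 MB, i.e. about 1 ms of data, which is what lets
more than 2 full-sized packets sit in the Qdisc/device queues.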