Message-ID: <1382595011.7572.36.camel@edumazet-glaptop.roam.corp.google.com>
Date:	Wed, 23 Oct 2013 23:10:11 -0700
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Stephen Hemminger <stephen@...workplumber.org>
Cc:	David Miller <davem@...emloft.net>, ncardwell@...gle.com,
	dave.taht@...ferbloat.net, netdev@...r.kernel.org
Subject: Re: 16% regression on 10G caused by TCP small queues

On Wed, 2013-10-23 at 21:45 -0700, Stephen Hemminger wrote:
> On Wed, 23 Oct 2013 23:38:16 -0400 (EDT)
> David Miller <davem@...emloft.net> wrote:
> 
> > From: Stephen Hemminger <stephen@...workplumber.org>
> > Date: Wed, 23 Oct 2013 20:09:49 -0700
> > 
> > > I will check 3.12, but what about users on 3.10 which is the LTS
> > > kernel used by most distros?
> 
> 3.12-rc6 gets line rate again (9.41 Gbit/sec)
> 
> > The fix will be backported to -stable, relax Stephen.
> 
> Sorry, thought sk_pacing_rate depended on FQ qdisc but it is other way around.
> In which case doing merge of these two was sufficient to fix the problem.
> With a minor manual fix up to tcp.h.

Btw, what NIC are you using?

I also had to patch the mlx4 driver because its coalescing parameters
were too large. That was before TCP Small Queue dynamic sizing, but it's
worth noting that for the initial ramp-up (when a flow's sk_pacing_rate
is low because the initial cwnd is 10), it can still make a difference.
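To get a rough sense of why the ramp-up matters: the pacing rate scales as roughly 2 * cwnd * mss / srtt (the factor of 2 being the default 200% pacing ratio), so the first round trips run far below line rate. A back-of-envelope sketch; the mss and srtt values are illustrative assumptions:

```python
# Back-of-envelope: pacing rate during the initial ramp-up.
# Formula approximates "rate ~= 2 * cwnd * mss / srtt"; the
# mss/srtt numbers below are assumptions for illustration only.

def pacing_rate_bps(cwnd, mss_bytes, srtt_sec, ratio=2.0):
    """Approximate sk_pacing_rate in bits per second."""
    return ratio * cwnd * mss_bytes * 8 / srtt_sec

# Initial cwnd of 10 segments, 1448-byte MSS, 1 ms RTT:
initial = pacing_rate_bps(cwnd=10, mss_bytes=1448, srtt_sec=1e-3)
print(f"{initial / 1e9:.3f} Gbit/s")  # ~0.232 Gbit/s -- far below 10G
```

At such low pacing rates, even a modest TX-completion delay stalls the flow for a noticeable fraction of each round trip.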

This was:

commit ecfd2ce1a9d5e6376ff5c00b366345160abdbbb7
Author: Eric Dumazet <edumazet@...gle.com>
Date:   Mon Nov 5 16:20:42 2012 +0000

    mlx4: change TX coalescing defaults
    
    mlx4 currently uses a too high tx coalescing setting, deferring
    TX completion interrupts by up to 128 us.
    
    With the recent skb_orphan() removal in commit 8112ec3b872,
    performance of a single TCP flow is capped to ~4 Gbps, unless
    we increase tcp_limit_output_bytes.
    
    I suggest using 16 us instead of 128 us, allowing a finer control.
    
    Performance of a single TCP flow is restored to previous levels,
    while keeping TCP small queues fully enabled with default sysctl.
    
    This patch is also a BQL prereq.
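To see where a ~4 Gbps ceiling can come from: with TX completions deferred by up to 128 us and TCP Small Queues bounding the bytes queued in the qdisc/NIC, a flow can drain at most that budget per completion interval. A rough sketch; the ~64 KB per-interval budget is an assumption (about half the default 128 KB tcp_limit_output_bytes), and exact TSQ accounting differs in detail:

```python
# Back-of-envelope: how deferred TX completions cap a TSQ-limited flow.
# Assumption: the flow releases roughly `budget_bytes` per completion
# interval (~64 KB here); real TSQ bookkeeping is more involved.

def capped_rate_bps(budget_bytes, completion_delay_sec):
    return budget_bytes * 8 / completion_delay_sec

print(f"128 us: {capped_rate_bps(65536, 128e-6) / 1e9:.2f} Gbit/s")  # ~4.10
print(f" 16 us: {capped_rate_bps(65536, 16e-6) / 1e9:.2f} Gbit/s")   # ~32.77
```

With 128 us deferral the cap lands right around the ~4 Gbps figure from the commit message, while 16 us pushes the ceiling comfortably above 10G line rate.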


--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
