Date:   Mon, 10 Apr 2017 12:31:09 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Jarod Wilson <jarod@...hat.com>
Cc:     netdev <netdev@...r.kernel.org>
Subject: Re: Horrid balance-rr bonding udp throughput

On Mon, 2017-04-10 at 14:50 -0400, Jarod Wilson wrote:
> On 2017-04-08 7:33 PM, Jarod Wilson wrote:
> > I'm digging into some bug reports covering performance issues with 
> > balance-rr, and discovered something even worse than what the reporter 
> > described. My test setup has a pair of NICs, one e1000e, one e1000 
> > (though dual e1000e behaves the same). When I do a test run in LNST 
> > with bonding mode balance-rr and either miimon or arpmon, the 
> > throughput of the netperf UDP_STREAM test is absolutely horrible:
> > 
> > TCP: 941.19 +-0.88 mbits/sec
> > UDP: 45.42 +-4.59 mbits/sec
> > 
> > I figured I'd try LNST's packet capture mode, so I ran the exact same 
> > test with the -p flag added, and I get:
> > 
> > TCP: 941.21 +-0.82 mbits/sec
> > UDP: 961.54 +-0.01 mbits/sec
> > 
> > Uh. What? So yeah. I can't capture the traffic in the bad case, but I 
> > guess that gives some potential insight into what's not happening 
> > correctly in either the bonding driver or the NIC drivers... More 
> > digging forthcoming, but first I have a flooded basement to deal with; 
> > if anyone has some insight in the interim, I'd be happy to hear it. :)
> 
> Okay, ignore the bit about bonding; I should have eliminated the bond 
> from the picture entirely. I think the traffic simply ended up on the 
> e1000 in the non-capture test and on the e1000e in the capture test, as 
> those numbers match perfectly with straight NIC-to-NIC testing, no bond 
> involved. That said, it's really odd that the e1000 is so severely 
> crippled for UDP while TCP is still respectable. I'm not sure if I have 
> a flaky NIC or what...
> 
> For reference, e1000 to e1000e netperf (one way to reproduce this is 
> sketched below the quoted text):
> 
> TCP_STREAM: Measured rate was 849.95 +-1.32 mbits/sec
> UDP_STREAM: Measured rate was 44.73 +-5.73 mbits/sec
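
One way to stand up the quoted configuration, as a sketch only (the
interface names eth0/eth1, the 192.168.1.x addresses, and the 60-second
run length are assumptions, not details from the report):

  # Create a balance-rr bond with MII link monitoring, enslave both NICs.
  ip link add bond0 type bond mode balance-rr miimon 100
  ip link set eth0 down; ip link set eth0 master bond0
  ip link set eth1 down; ip link set eth1 master bond0
  ip link set bond0 up
  ip addr add 192.168.1.1/24 dev bond0

  # With netserver running on the peer, the two measurements quoted above:
  netperf -H 192.168.1.2 -t TCP_STREAM -l 60
  netperf -H 192.168.1.2 -t UDP_STREAM -l 60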

In our experiments, we found that e1000e had a latency issue with UDP
packets, not with TCP.

Try e1000e -> e1000e; the problem should persist, right?
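
Something along these lines would isolate the driver (again a sketch;
the peer address is an assumption, and netserver is assumed to be
running on the far side). UDP_RR gives a round-trip latency view to go
with the UDP_STREAM throughput number:

  # Direct e1000e -> e1000e, no bond involved.
  netperf -H 192.168.1.2 -t UDP_STREAM -l 60   # throughput
  netperf -H 192.168.1.2 -t UDP_RR -l 60       # request/response latency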




