Date:	Tue, 02 Nov 2010 10:29:45 +0100
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Simon Horman <horms@...ge.net.au>
Cc:	netdev@...r.kernel.org, Jay Vosburgh <fubar@...ibm.com>,
	"David S. Miller" <davem@...emloft.net>
Subject: Re: bonding: flow control regression [was Re: bridging: flow control regression]

On Tuesday, 02 November 2010 at 17:46 +0900, Simon Horman wrote:

> Thanks Eric, that seems to resolve the problem that I was seeing.
> 
> With your patch I see:
> 
> No bonding
> 
> # netperf -c -4 -t UDP_STREAM -H 172.17.60.216 -l 30 -- -m 1472
> UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.17.60.216 (172.17.60.216) port 0 AF_INET
> Socket  Message  Elapsed      Messages                   CPU      Service
> Size    Size     Time         Okay Errors   Throughput   Util     Demand
> bytes   bytes    secs            #      #   10^6bits/sec % SU     us/KB
> 
> 116736    1472   30.00     2438413      0      957.2     8.52     1.458 
> 129024           30.00     2438413             957.2     -1.00    -1.000
> 
> With bonding (one slave, the interface used in the test above)
> 
> netperf -c -4 -t UDP_STREAM -H 172.17.60.216 -l 30 -- -m 1472
> UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.17.60.216 (172.17.60.216) port 0 AF_INET
> Socket  Message  Elapsed      Messages                   CPU      Service
> Size    Size     Time         Okay Errors   Throughput   Util     Demand
> bytes   bytes    secs            #      #   10^6bits/sec % SU     us/KB
> 
> 116736    1472   30.00     2438390      0      957.1     8.97     1.535 
> 129024           30.00     2438390             957.1     -1.00    -1.000
> 


Sure, the patch helps when not too many flows are involved, but it is a
hack.

Say the device queue is 1000 packets and you run a workload with 2000
sockets; it won't work...
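
As a rough sketch, the many-sockets case could be approximated by
spawning that many concurrent UDP senders against the same target
(address and message size taken from Simon's test above; the flow
count of 2000 matches my example):

    for i in $(seq 1 2000); do
        netperf -t UDP_STREAM -H 172.17.60.216 -l 30 -- -m 1472 &
    done
    wait

With 2000 flows, a 1000-packet device queue cannot hold even one
packet per socket, so blocking senders on queue space alone cannot
throttle them all.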

Or say the device queue is 1000 packets, there is a single flow, and the
socket send queue size allows more than 1000 packets to be 'in flight'
(echo 2000000 > /proc/sys/net/core/wmem_default); it won't work with
bonding either. It only works when a qdisc sits on the first device
met after the socket.
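
To reproduce that single-flow case, it is enough to let one socket
queue more packets than the device can hold (eth0 stands for the slave
device here):

    cat /sys/class/net/eth0/tx_queue_len       # device queue depth, typically 1000
    echo 2000000 > /proc/sys/net/core/wmem_default
    netperf -c -4 -t UDP_STREAM -H 172.17.60.216 -l 30 -- -m 1472

With a ~2 MB send buffer and 1472-byte datagrams, more than 1000
packets can be in flight at once, so the device queue overflows unless
a qdisc backpressures the socket.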



