Message-ID: <20101106092535.GD5128@verge.net.au>
Date: Sat, 6 Nov 2010 18:25:37 +0900
From: Simon Horman <simon@...ms.net>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: netdev@...r.kernel.org, Jay Vosburgh <fubar@...ibm.com>,
"David S. Miller" <davem@...emloft.net>
Subject: Re: bonding: flow control regression [was Re: bridging: flow
control regression]
On Tue, Nov 02, 2010 at 10:29:45AM +0100, Eric Dumazet wrote:
> On Tuesday, 02 November 2010 at 17:46 +0900, Simon Horman wrote:
>
> > Thanks Eric, that seems to resolve the problem that I was seeing.
> >
> > With your patch I see:
> >
> > No bonding
> >
> > # netperf -c -4 -t UDP_STREAM -H 172.17.60.216 -l 30 -- -m 1472
> > UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.17.60.216 (172.17.60.216) port 0 AF_INET
> > Socket  Message  Elapsed      Messages                   CPU      Service
> > Size    Size     Time         Okay Errors   Throughput   Util     Demand
> > bytes   bytes    secs            #      #   10^6bits/sec % SU     us/KB
> >
> > 116736    1472   30.00      2438413      0      957.2     8.52     1.458
> > 129024           30.00      2438413             957.2    -1.00    -1.000
> >
> > With bonding (one slave, the interface used in the test above)
> >
> > netperf -c -4 -t UDP_STREAM -H 172.17.60.216 -l 30 -- -m 1472
> > UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.17.60.216 (172.17.60.216) port 0 AF_INET
> > Socket  Message  Elapsed      Messages                   CPU      Service
> > Size    Size     Time         Okay Errors   Throughput   Util     Demand
> > bytes   bytes    secs            #      #   10^6bits/sec % SU     us/KB
> >
> > 116736    1472   30.00      2438390      0      957.1     8.97     1.535
> > 129024           30.00      2438390             957.1    -1.00    -1.000
> >
>
>
> Sure the patch helps when not too many flows are involved, but this is a
> hack.
>
> Say the device queue is 1000 packets and you run a workload with 2000
> sockets: it won't work...
>
> Or the device queue is 1000 packets, there is one flow, and the socket
> send queue size allows more than 1000 packets to be 'in flight'
> (echo 2000000 >/proc/sys/net/core/wmem_default): it won't work with
> bonding either, only with devices where a qdisc sits in the first
> device met after the socket.

True, thanks for pointing that out.

The scenario that I am actually interested in is virtualisation, and I
believe that your patch helps the vhostnet case (I don't see flow
control problems with bonding + virtio without vhostnet). However, I am
unsure whether there are similarly easy ways to degrade flow control in
the vhostnet case too (the oversized-send-buffer scenario is sketched
below for reference).
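
A minimal sketch of that oversized-send-buffer scenario Eric describes
above (the slave interface name eth0 is illustrative only; the
wmem_default value and the netperf invocation are the ones quoted
earlier in this thread):

  # note the transmit queue length of the device (typically 1000 packets)
  cat /sys/class/net/eth0/tx_queue_len

  # allow a single socket to keep far more than that many packets in flight
  echo 2000000 > /proc/sys/net/core/wmem_default

  # drive one UDP flow through the bond; with no qdisc on the first device
  # after the socket there is nothing to push back on the sender
  netperf -c -4 -t UDP_STREAM -H 172.17.60.216 -l 30 -- -m 1472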