Message-ID: <m1wsamcp4n.fsf@fess.ebiederm.org>
Date: Wed, 18 Mar 2009 17:50:16 -0700
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Ryousei Takano <ryousei@...il.com>
Cc: Daniel Lezcano <dlezcano@...ibm.com>,
Linux Containers <containers@...ts.osdl.org>,
Linux Netdev List <netdev@...r.kernel.org>,
lxc-devel@...ts.sourceforge.net
Subject: Re: [lxc-devel] Poor bridging performance on 10 GbE
Ryousei Takano <ryousei@...il.com> writes:
> I am using VServer because other virtualization mechanisms, including OpenVZ,
> Xen, and KVM, cannot fully utilize the network bandwidth of 10 GbE.
>
> Here are the results of the netperf benchmark (throughput in Mbps):
> vanilla (2.6.27-9)   9525.94
> VServer (2.6.27.10)  9521.79
> OpenVZ  (2.6.27.10)  2049.89
> Xen     (2.6.26.1)   1011.47
> KVM     (2.6.27-9)   1022.42
>
> Now I am interested in using LXC instead of VServer.
A good argument.
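For reference, I am assuming these are single-stream netperf TCP_STREAM
numbers; something along these lines on the sender, with netserver running
on the receiving 10 GbE host, should reproduce the measurement (the address
is only a placeholder):
$ netperf -H 192.0.2.1 -t TCP_STREAM -l 60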
>>> Using a macvlan device, the throughput was 9.6 Gbps. But, using a veth
>>> device,
>>> the throughput was only 2.7 Gbps.
>>
>> Yeah, definitely the macvlan interface is the best in terms of
>> performance, but with the restriction of not being able to communicate
>> between containers on the same host.
>>
> This restriction is not a big issue for my purpose.
Right. I have been trying to figure out the best way to cope
with that restriction.
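For anyone who wants to try the macvlan path with lxc, the network section
of the container config would look roughly like this (just a sketch; I am
assuming eth1 is the 10 GbE NIC, as in your brctl output below):
lxc.network.type = macvlan
lxc.network.link = eth1
lxc.network.flags = up
The trade-off is the one Daniel describes: near-native throughput to the
wire, but no container-to-container traffic on the same host.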
>>> I also checked the host OS's performance when I used a veth device.
>>> I observed a strange phenomenon.
>>>
>>> Before issuing the lxc-start command, the throughput was 9.6 Gbps.
>>> Here is the output of brctl show:
>>> $ brctl show
>>> bridge name bridge id STP enabled interfaces
>>> br0 8000.0060dd470d49 no eth1
>>>
>>> After issuing the lxc-start command, the throughput decreased to 3.2 Gbps.
>>> Here is the output of brctl show:
>>> $ sudo brctl show
>>> bridge name bridge id STP enabled interfaces
>>> br0 8000.0060dd470d49 no eth1
>>> veth0_7573
>>>
>>> I wonder why the performance is greatly influenced by adding a veth device
>>> to a bridge device.
>>
>> Hmm, good question :)
Last I looked, bridging uses the least common denominator of the hardware
offloads of its attached ports, which likely explains why adding a veth
device decreased your bridging performance.
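One quick way to confirm this would be to compare the offload flags before
and after the veth is added, something like (ethtool may not report every
flag for virtual devices, depending on the kernel):
$ ethtool -k eth1
$ ethtool -k br0
$ ethtool -k veth0_7573
If checksum or TSO/GSO offload shows up as off on the bridge once the veth
joins, that would line up with the drop you measured.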
>>> Here is my experimental setting:
>>> OS: Ubuntu server 8.10 amd64
>>> Kernel: 2.6.27-rc8 (checked out from the lxc git repository)
>>
>> I would recommend using the vanilla 2.6.29-rc8, because this kernel no
>> longer needs patches, a lot of fixes were done in the network namespace
>> code, and maybe the bridge has been improved in the meantime :)
>>
> I checked out the 2.6.29-rc8 vanilla kernel.
> The performance after issuing lxc-start improved to 8.7 Gbps!
> It's a big improvement, though some performance loss remains.
> Can't we avoid this loss?
Good question. Any chance you can profile this and see where the
performance loss seems to be coming from?
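On a 2.6.29 kernel oprofile is probably the easiest tool; a rough sketch
(the vmlinux path is whatever matches your running kernel):
$ opcontrol --vmlinux=/path/to/vmlinux
$ opcontrol --start
(run the netperf test)
$ opcontrol --shutdown
$ opreport --symbols | head -30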
Eric