Message-ID: <b30d1c3b0903182237w7667c5bbt1ea098025f3b5f36@mail.gmail.com>
Date:	Thu, 19 Mar 2009 14:37:58 +0900
From:	Ryousei Takano <ryousei@...il.com>
To:	"Eric W. Biederman" <ebiederm@...ssion.com>
Cc:	Daniel Lezcano <dlezcano@...ibm.com>,
	Linux Containers <containers@...ts.osdl.org>,
	Linux Netdev List <netdev@...r.kernel.org>,
	lxc-devel@...ts.sourceforge.net
Subject: Re: [lxc-devel] Poor bridging performance on 10 GbE

Hi Eric,

On Thu, Mar 19, 2009 at 9:50 AM, Eric W. Biederman
<ebiederm@...ssion.com> wrote:

[snip]

> Bridging last I looked uses the least common denominator of hardware
> offloads.  Which likely explains why adding a veth decreased your
> bridging performance.
>
At least for now, LRO cannot coexist with bridging,
so I disabled the LRO feature of the myri10ge driver.
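
For reference, one way to check and disable it is with ethtool
(just a sketch; the interface name eth2 is an assumption here):

  # show the current offload settings for the 10 GbE port
  $ ethtool -k eth2
  # turn large receive offload off before the port joins a bridge
  $ ethtool -K eth2 lro off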

>>>> Here is my experimental setting:
>>>>        OS: Ubuntu server 8.10 amd64
>>>>        Kernel: 2.6.27-rc8 (checkout from the lxc git repository)
>>>
>>> I would recommend to use the 2.6.29-rc8 vanilla because this kernel does no
>>> longer need patches, a lot of fixes were done in the network namespace and
>>> maybe the bridge has been improved in the meantime :)
>>>
>> I checked out the 2.6.29-rc8 vanilla kernel.
>> The performance after issuing lxc-start improved to 8.7 Gbps!
>> That's a big improvement, but some performance loss remains.
>> Can we avoid this loss?
>
> Good question.  Any chance you can profile this and see where the
> performance loss seems to be coming from?
>
I found out that this issue is caused by a decrease in the MTU size.
The Myri-10G's MTU is 9000 bytes, while the veth's MTU is 1500 bytes,
so after adding the veth to the bridge, the bridge's MTU drops from
9000 to 1500 bytes.  I changed the veth's MTU to 9000 bytes and
confirmed that the throughput improved to 9.6 Gbps.
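
For anyone who wants to reproduce this, something like the following
should work (veth0 and br0 are just assumed names for the host-side
veth and the bridge):

  # raise the MTU of the host-side veth so the bridge can keep 9000
  $ ip link set dev veth0 mtu 9000
  # the bridge takes the minimum MTU of its ports; check the result
  $ ip link show dev br0 | grep mtu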

The throughput between LXC containers also improved, to 4.9 Gbps,
after changing the MTU sizes.

So I propose adding an lxc.network.mtu option to the LXC configuration.
How does that sound?
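
As a rough sketch of how the proposed option might look in a
container's config (assuming the usual lxc.network.* veth setup;
the bridge name br0 is just an example):

  # container network section
  lxc.network.type = veth
  lxc.network.link = br0
  # proposed option: set the veth MTU at container start
  lxc.network.mtu = 9000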

> Eric
>

Best regards,
Ryousei Takano
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
