Message-ID: <CAJZOPZJACTX1giFudwbBqppE_GyQVzamNqiO61vL3d0eoaJoFQ@mail.gmail.com>
Date:	Sun, 8 Dec 2013 22:12:32 +0200
From:	Or Gerlitz <or.gerlitz@...il.com>
To:	Joseph Gasparakis <joseph.gasparakis@...el.com>
Cc:	Or Gerlitz <ogerlitz@...lanox.com>,
	Pravin B Shelar <pshelar@...ira.com>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Jerry Chu <hkchu@...gle.com>,
	Eric Dumazet <edumazet@...gle.com>,
	Alexei Starovoitov <ast@...mgrid.com>,
	David Miller <davem@...emloft.net>,
	netdev <netdev@...r.kernel.org>,
	"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
	John Fastabend <john.fastabend@...il.com>
Subject: Re: vxlan/veth performance issues on net.git + latest kernels

On Sun, Joseph Gasparakis <joseph.gasparakis@...el.com> wrote:

>> What I saw is that if I leave the DODGY bit set, practically things
>> don't work at all; it's not that some packets are dropped. Was that
>> what you saw?

> What I saw was GSO packets being badly segmented, causing many
> retransmissions and dropping the performance to a few MB/s.

Yes, in my testbed up to about 400 Mb/s (b not B..., yes!)
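For context on the DODGY bit: it marks GSO metadata that arrived from an untrusted source (e.g. a guest's virtio ring), so the host stack must revalidate it before handing the packet to hardware offload. Below is a toy sketch of that idea only; it uses made-up types and a hypothetical bit value, not the real skb_shared_info fields or the per-protocol gso_segment() callbacks:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bit value, for illustration only. */
#define GSO_DODGY (1u << 2)

/* Toy stand-in for the GSO fields carried in skb_shared_info. */
struct toy_gso_info {
	uint32_t gso_type;
	uint16_t gso_size;	/* MSS claimed by the (untrusted) guest */
};

/* When DODGY is set the host must not trust gso_size blindly: it
 * re-derives/validates segmentation parameters before offloading.
 * An oversized gso_size slipping through is one way "badly segmented"
 * packets and retransmission storms can appear. */
static bool gso_params_trusted(const struct toy_gso_info *info,
			       uint16_t path_mss)
{
	if (info->gso_type & GSO_DODGY)
		return info->gso_size <= path_mss;	/* revalidate */
	return true;	/* host-originated, already trusted */
}
```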


>> Also, did you hack/modify the VM NIC MTU to take into account
>> the encapsulation overhead?

> The virtio interfaces I used had MTU 1500, but the MTU of the physical NIC
> was increased to 1600.

mmm, that's sort of equivalent, but zero-touch VM-wise, nice!
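The 1600-byte physical MTU works because VXLAN adds a fixed per-packet overhead on top of the inner frame. A minimal arithmetic sketch (assuming an IPv4 outer header and no VLAN tags; the numbers are standard header sizes, not taken from the thread):

```c
#include <assert.h>

/* VXLAN encapsulation overhead counted against the underlay MTU:
 * inner Ethernet header of the encapsulated frame (14) +
 * outer IPv4 (20) + outer UDP (8) + VXLAN header (8) = 50 bytes.
 * (The outer Ethernet header is not counted against the MTU.) */
enum {
	INNER_ETH_HDR  = 14,
	OUTER_IPV4_HDR = 20,
	OUTER_UDP_HDR  = 8,
	VXLAN_HDR      = 8,
	VXLAN_OVERHEAD = INNER_ETH_HDR + OUTER_IPV4_HDR +
			 OUTER_UDP_HDR + VXLAN_HDR,	/* 50 */
};

/* Minimum physical-NIC MTU needed to carry a full inner frame
 * without fragmenting the encapsulated packet. */
static int underlay_mtu_for(int inner_mtu)
{
	return inner_mtu + VXLAN_OVERHEAD;
}
```

So 1550 is the minimum for inner MTU 1500; raising the NIC to 1600, as above, just leaves some headroom.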


> I have only noticed this with the offloads on. Turning encapsulation
> TSO off would simply make the GSO packets get segmented in
> dev_hard_xmit() as expected.

mmm, I am not sure this is the case with kernels > 3.10.x, but I'd
like to double-check that. Basically, it's possible that I didn't make
sure to always have a "proper" MTU at the VM at all times.

Also, did you see the asymmetry between TX and RX which I reported
earlier today? That is, accelerated TX from a single VM can go as high
as >30 Gb/s, while RX to a single VM (or even multiple VMs) doesn't go
beyond 5-6 Gb/s, probably due to the lack of GRO.
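A back-of-envelope calculation shows why missing GRO could cap RX like this: without aggregation the receive path handles one skb per wire frame, while GRO coalesces many frames into one before the stack sees them. A sketch, where the frame sizes and the ~30:1 aggregation ratio are illustrative assumptions rather than measured values:

```c
#include <assert.h>

/* Packets (skbs) per second the receive path must process
 * at a given line rate and per-packet frame size. */
static long pkts_per_sec(long long bits_per_sec, int frame_bytes)
{
	return (long)(bits_per_sec / (frame_bytes * 8LL));
}
```

At 5 Gb/s with 1500-byte frames that is roughly 417k skbs/s through the RX path; if GRO merged ~30 frames into one ~45 KB super-frame, the same rate would cost only ~14k skbs/s, which is why the lack of GRO plausibly explains the RX ceiling.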
