Date:	Fri, 8 Jul 2016 11:12:05 -0400
From:	Hannes Frederic Sowa <hannes@...essinduktion.org>
To:	Eric Dumazet <eric.dumazet@...il.com>,
	Paolo Abeni <pabeni@...hat.com>
Cc:	netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
	Jesse Gross <jesse@...nel.org>,
	Tom Herbert <tom@...bertland.com>, Jiri Benc <jbenc@...hat.com>
Subject: Re: [PATCH net-next 3/4] vxlan: remove gro_cell support

Hi Eric,

On 07.07.2016 12:13, Eric Dumazet wrote:
> On Thu, 2016-07-07 at 17:58 +0200, Paolo Abeni wrote:
>> GRO is now handled entirely by the udp_offload layer and there is no need
>> to try it again at the device level. We can drop gro_cell usage,
>> simplifying the driver a bit, while maintaining the same performance for
>> TCP and improving it slightly for UDP.
>> This basically reverts the commit 58ce31cca1ff ("vxlan: GRO support
>> at tunnel layer")
> 
> Note that gro_cells provide GRO support after RPS, so this helps when we
> must perform the TCP checksum computation, if the NIC lacks CHECKSUM_COMPLETE.
> 
> (Say we receive packets all steered to a single RX queue because the RSS
> hash is computed on the outer header only.)
> 
> Some people disable GRO on the physical device, but enable GRO on the
> tunnels.
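
For reference, the gro_cells path under discussion looks roughly like this
from a tunnel driver's point of view. This is a minimal sketch, not the
actual vxlan code; the "my_tunnel" names are made up for illustration:

/* Minimal sketch of how a tunnel driver uses gro_cells so inner flows
 * still get GRO on the CPU that ends up handling the packet; names like
 * "my_tunnel" are illustrative only.
 */
#include <linux/netdevice.h>
#include <net/gro_cells.h>

struct my_tunnel {
	struct net_device	*dev;
	struct gro_cells	gro_cells;	/* per-CPU GRO/NAPI cells */
};

static int my_tunnel_open(struct net_device *dev)
{
	struct my_tunnel *t = netdev_priv(dev);

	/* allocate one NAPI cell per CPU, owned by this device */
	return gro_cells_init(&t->gro_cells, dev);
}

/* called from the encap receive path with the decapsulated skb */
static int my_tunnel_encap_recv(struct my_tunnel *t, struct sk_buff *skb)
{
	/* queue the skb to this CPU's cell; its NAPI poll then runs
	 * napi_gro_receive(), so TCP segments of the inner flow can still
	 * be aggregated even if the NIC only hashed/saw the outer header */
	return gro_cells_receive(&t->gro_cells, skb);
}

static int my_tunnel_stop(struct net_device *dev)
{
	struct my_tunnel *t = netdev_priv(dev);

	gro_cells_destroy(&t->gro_cells);
	return 0;
}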

We are currently discussing your feedback and wondering how much sense it
makes to support such a scenario.

We encode part of the inner hash in the outer UDP source port, so even the
outer hash provides enough entropy to spread frames of one tunnel across
multiple CPUs via hardware hashing, given that you don't care about
out-of-order (OoO) delivery for UDP (I infer that from the fact that RPS
will also reorder UDP frames in case of fragmentation).
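
For illustration, the transmit side derives the outer source port roughly
like this. This is a simplified sketch along the lines of udp_flow_src_port()
in include/net/udp.h; the helper name and the exact hashing details here are
illustrative, not the real implementation:

/* Simplified sketch: fold the inner flow hash into the outer UDP source
 * port on transmit (see udp_flow_src_port() for the real thing).
 */
#include <linux/skbuff.h>
#include <net/ip.h>

static __be16 my_outer_src_port(struct net *net, struct sk_buff *skb,
				int min, int max)
{
	u32 hash = skb_get_hash(skb);	/* hash over the *inner* headers */

	/* callers may pass 0/0 to mean "use the local ephemeral port range" */
	if (min >= max)
		inet_get_local_port_range(net, &min, &max);

	/* map the inner flow hash into [min, max), so an RSS/RPS hash over
	 * the outer 5-tuple still separates distinct inner flows */
	return htons((((u64)hash * (max - min)) >> 32) + min);
}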

I wonder whether it still makes sense to take single-RX-queue NICs into
consideration. We already provide multiqueue support for most VM-related
interfaces as well. Can you describe why someone would run such a setup?

Thank you,
Hannes
