Date:	Fri, 14 Feb 2014 14:06:35 +0000
From:	Wei Liu <wei.liu2@...rix.com>
To:	"Andrew J. Bennieston" <andrew.bennieston@...rix.com>
CC:	<xen-devel@...ts.xenproject.org>, <ian.campbell@...rix.com>,
	<wei.liu2@...rix.com>, <paul.durrant@...rix.com>,
	<netdev@...r.kernel.org>
Subject: Re: [PATCH V2 net-next 0/5] xen-net{back,front}: Multiple transmit
 and receive queues

On Fri, Feb 14, 2014 at 11:50:19AM +0000, Andrew J. Bennieston wrote:
> 
> This patch series implements multiple transmit and receive queues (i.e.
> multiple shared rings) for the xen virtual network interfaces.
> 
> The series is split up as follows:
>  - Patches 1 and 3 factor out the queue-specific data for netback and
>     netfront respectively, and modify the rest of the code to use these
>     as appropriate.
>  - Patches 2 and 4 introduce new XenStore keys to negotiate and use
>    multiple shared rings and event channels, and code to connect these
>    as appropriate.
>  - Patch 5 documents the XenStore keys required for the new feature
>    in include/xen/interface/io/netif.h
> 
> All other transmit and receive processing remains unchanged, i.e. there
> is a kthread per queue and a NAPI context per queue.
> 
> The performance of these patches has been analysed in detail, with
> results available at:
> 
> http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing
> 
> To summarise:
>   * Using multiple queues allows a VM to transmit at line rate on a 10
>     Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
>     with a single queue.
>   * For intra-host VM--VM traffic, eight queues provide 171% of the
>     throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
>   * There is a corresponding increase in total CPU usage, i.e. this is a
>     scaling out over available resources, not an efficiency improvement.
>   * Results depend on the availability of sufficient CPUs, as well as the
>     distribution of interrupts and the distribution of TCP streams across
>     the queues.
> 
> Queue selection is currently achieved via an L4 hash on the packet (i.e.
> TCP src/dst port, IP src/dst address) and is not negotiated between the
> frontend and backend, since only one option exists. Future patches to
> support other frontends (particularly Windows) will need to add the
> capability to negotiate not only the hash algorithm selection, but also
> to allow the frontend to specify parameters for it.
> 
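
For reference, my understanding of the per-queue XenStore layout this
series negotiates is roughly the following (illustrative only; the exact
key names are taken from my reading of the patches and may be off):

    multi-queue-max-queues   = "<n>"       (backend's advertised maximum)
    multi-queue-num-queues   = "<n>"       (frontend's chosen number)
    queue-0/tx-ring-ref      = "<gref>"
    queue-0/rx-ring-ref      = "<gref>"
    queue-0/event-channel-tx = "<evtchn>"
    queue-0/event-channel-rx = "<evtchn>"
    queue-1/...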

The hash selection described above has an impact on the protocol: if the
key that selects the hash algorithm is missing, the assumption is that L4
hashing is in use.

That default either needs to be documented (it is missing from your patch
to netif.h), or you need to write the key explicitly in XenStore.
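
As a rough sketch of what I mean (the key name "multi-queue-hash" and the
function name below are made up, purely to illustrate the default):

    #include <linux/err.h>
    #include <linux/slab.h>
    #include <xen/xenbus.h>

    /* Sketch only: read the (hypothetical) hash algorithm key during
     * connect.  Absence of the key means the implicit default, i.e. the
     * L4 hash, and that default must be spelled out in netif.h. */
    static void xenvif_read_hash_algo(struct xenbus_device *dev)
    {
            char *algo = xenbus_read(XBT_NIL, dev->otherend,
                                     "multi-queue-hash", NULL);

            if (IS_ERR(algo)) {
                    /* Key not written: fall back to L4 hashing. */
                    return;
            }

            /* ... parse and validate the advertised algorithm ... */
            kfree(algo);
    }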

I also have a question: what happens if one end advertises one hash
algorithm and then uses a different one? This can happen when the driver
is rogue or buggy. Will it cause the "good guy" to stall? At the very
least, we certainly don't want to stall the backend.

I don't see any code in this series to handle a "rogue other end". I
presume that for a simple hash algorithm like L4 this is not very
important (even if a packet ends up in the wrong queue we can still
process it safely), or that the core driver can deal with it by itself
(by dropping the packet)?
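
(For what it's worth, the frontend-side selection I would expect looks
roughly like the sketch below; the function name is made up.  The point
is that whichever queue the hash picks, it is a fully initialised ring,
so a disagreement about the hash only skews the flow distribution rather
than breaking per-packet processing.)

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Sketch: hash-based transmit queue selection. */
    static u16 xennet_pick_tx_queue(struct net_device *dev,
                                    struct sk_buff *skb)
    {
            unsigned int num_queues = dev->real_num_tx_queues;

            /* skb_get_hash() yields an L4 flow hash where possible. */
            return skb_get_hash(skb) % num_queues;
    }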

Wei.
