Date:	Fri, 7 Aug 2009 21:00:54 +0000
From:	"Fischer, Anna" <anna.fischer@...com>
To:	Yaron Haviv <yaronh@...taire.com>,
	"evb@...oogroups.com" <evb@...oogroups.com>,
	"shemminger@...ux-foundation.org" <shemminger@...ux-foundation.org>
CC:	"bridge@...ts.linux-foundation.org" 
	<bridge@...ts.linux-foundation.org>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"virtualization@...ts.linux-foundation.org" 
	<virtualization@...ts.linux-foundation.org>,
	"davem@...emloft.net" <davem@...emloft.net>,
	"kaber@...sh.net" <kaber@...sh.net>,
	"adobriyan@...il.com" <adobriyan@...il.com>,
	"arnd@...db.de" <arnd@...db.de>,
	"Paul Congdon (UC Davis)" <ptcongdon@...avis.edu>
Subject: RE: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

Hi Yaron,

Yes, I also believe that VEPA + SR-IOV can potentially, in some deployments, achieve better performance than a bridge/tap configuration, especially when you run multiple VMs and want to enable more sophisticated network processing in the data path.

If you do have an SR-IOV NIC that supports VEPA, then I would think that you do not have QEMU or macvtap in the setup any more, simply because in that case the VM can directly access the VF on the physical device. That would be ideal.

I do think that the macvtap driver is a good addition as a simple and fast virtual network I/O interface, in case you do not need full bridge functionality. It does seem to assume though that the virtualization software uses QEMU/tap interfaces. How would this work with a Xen para-virtualized network interface? I guess there would need to be yet another driver?

Anna

--

From: Yaron Haviv [mailto:yaronh@...taire.com] 
Sent: 07 August 2009 21:36
To: evb@...oogroups.com; shemminger@...ux-foundation.org; Fischer, Anna
Cc: bridge@...ts.linux-foundation.org; netdev@...r.kernel.org; virtualization@...ts.linux-foundation.org; davem@...emloft.net; kaber@...sh.net; adobriyan@...il.com; arnd@...db.de
Subject: Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

Paul,

I also think that the bridge may not be the right place for VEPA, but rather a simpler software/hardware mux, although the VEPA support may reside in multiple places (i.e. also in the bridge).

As Arnd pointed out, Or has already added an extension to qemu that allows direct mapping of a guest virtual NIC to an interface device (vs. using tap). This was done specifically to address VEPA, and results in much faster performance and lower CPU overhead (Or and some others are planning additional meaningful performance optimizations).
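
For illustration only (this is a sketch, not Or's actual patch, and the helper name is made up), the userspace side of such a direct mapping essentially comes down to binding a raw packet socket to the macvlan/VF device instead of opening a tap:

/* Sketch: attach to a specific netdev (e.g. a macvlan instance or an
 * SR-IOV VF) with a raw AF_PACKET socket, so frames are read/written
 * directly on that device rather than through a tap interface. */
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <netinet/in.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>

int open_raw_backend(const char *ifname)
{
	struct sockaddr_ll sll;
	struct ifreq ifr;
	int fd;

	fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	if (fd < 0)
		return -1;

	/* Resolve the interface index of the device we want to attach to. */
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	if (ioctl(fd, SIOCGIFINDEX, &ifr) < 0) {
		close(fd);
		return -1;
	}

	/* Bind the socket to that device only. */
	memset(&sll, 0, sizeof(sll));
	sll.sll_family = AF_PACKET;
	sll.sll_protocol = htons(ETH_P_ALL);
	sll.sll_ifindex = ifr.ifr_ifindex;
	if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
		close(fd);
		return -1;
	}

	return fd;	/* read()/write() now move frames on this device */
}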

The interface multiplexing can be achieved using the macvlan driver or an SR-IOV capable NIC (the preferred option). macvlan may need to be extended to support VEPA multicast handling, but this looks like a rather simple task.

It may be counterintuitive for some, but we expect the (completed) qemu VEPA mode + SR-IOV + certain switches with hairpin (VEPA) mode to perform faster than using bridge+tap, even for connecting 2 VMs on the same host.


Yaron 

Sent from BlackBerry
________________________________________
From: evb@...oogroups.com 
To: 'Stephen Hemminger' ; 'Fischer, Anna' 
Cc: bridge@...ts.linux-foundation.org ; linux-kernel@...r.kernel.org ; netdev@...r.kernel.org ; virtualization@...ts.linux-foundation.org ; evb@...oogroups.com ; davem@...emloft.net ; kaber@...sh.net ; adobriyan@...il.com ; 'Arnd Bergmann' 
Sent: Fri Aug 07 21:58:00 2009
Subject: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support 
  
> 
> After reading more about this, I am not convinced this should be part 
> of the bridge code. The bridge code really consists of two parts:
> forwarding table and optional spanning tree. Well, the VEPA code
> short-circuits both of these; I can't imagine it working with STP
> turned on. The only parts of the bridge code that really get used by
> this are the receive packet hooks and the crufty old API.
> 
> So instead of adding more stuff to the existing bridge code, why not
> have a new driver for just VEPA? You could do it with a simple version
> of a macvlan-type driver.

Stephen,

Thanks for your comments and questions. We do believe the bridge code is
the right place for this, so I'd like to elaborate on that a bit more to
help persuade you. Sorry for the long-winded response, but here are some
thoughts:

- First and foremost, VEPA is going to be a standard addition to the IEEE
802.1Q specification. The working group agreed at the last meeting to
pursue a project to augment the bridge standard with hairpin mode (aka
reflective relay) and a remote filtering service (VEPA). For details, see:
http://www.ieee802.org/1/files/public/docs2009/new-evb-congdon-evbPar5C-0709-v01.pdf

- The VEPA functionality was really a pretty small change to the code with
low risk and wouldn't seem to warrant an entire new driver or module.

- There are good use cases where VMs will want to have some of their
interfaces attached to bridges and others to bridges operating in VEPA mode.
In other words, we see simultaneous operation of the bridge code and VEPA
occurring, so keeping as much of the underlying code common as possible
would seem to be beneficial. 

- By augmenting the bridge code with VEPA, a great amount of re-use is
achieved. It works wherever the bridge code works and doesn't need anything
special to support KVM, Xen, and all the hooks, etc.

- The hardware vendors building SR-IOV NICs with embedded switches will be
adding VEPA mode, so keeping the bridge module in sync would be
consistent with this trend and direction. It will be possible to extend the
hardware implementations by cascading a software bridge and/or VEPA, so
being in sync with the architecture would make this more consistent.

- The forwarding table is still needed and used on inbound traffic to
deliver frames to the correct virtual interfaces and to filter any reflected
frames. A new driver would have to basically implement an equivalent
forwarding table anyway. As I understand the current macvlan type driver,
it wouldn't filter multicast frames properly without such a table.

- It seems the hairpin mode would be needed in the bridge module whether
VEPA was added to the bridge module or a new driver. Having the associated
changes together in the same code could aid in understanding and deployment
(a rough sketch of that hairpin check follows this list).
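
To make the hairpin point concrete, here is a rough sketch (not the actual
patch; the flag and function names are illustrative) of the delivery check
in the bridge forwarding path. Normally a frame is never forwarded back out
of its ingress port; with the port in hairpin (reflective relay) mode that
restriction is lifted:

/* Sketch only: an illustrative per-port hairpin flag and the forwarding
 * check that would honour it.  struct net_bridge_port and
 * BR_STATE_FORWARDING live in the bridge's internal header
 * (net/bridge/br_private.h). */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define BR_PORT_FLAG_HAIRPIN	0x1	/* illustrative flag name */

static inline int should_deliver(const struct net_bridge_port *p,
				 const struct sk_buff *skb)
{
	/* Allow egress on the ingress port only when hairpin mode is on,
	 * so an adjacent VEPA can reach peers behind the same port. */
	return ((skb->dev != p->dev || (p->flags & BR_PORT_FLAG_HAIRPIN)) &&
		p->state == BR_STATE_FORWARDING);
}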

As I understand the macvlan code, it currently doesn't allow two VMs on the
same machine to communicate with one another. I could imagine a hairpin
mode on the adjacent bridge making this possible, but the macvlan code would
need to be updated to filter reflected frames so a source did not receive
its own packet. I could imagine this being done as well, but to also
support selective multicast usage, something similar to the bridge
forwarding table would be needed. I think putting VEPA into a new driver
would cause you to implement many things the bridge code already supports.
Given that we expect the bridge standard to ultimately include VEPA, and the
new functions are basic forwarding operations, it seems to make most sense
to keep this consistent with the bridge module.
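
Just to illustrate how similar that filtering is to what the bridge already
does, here is a sketch (not proposed code; structure and function names are
made up) of what a macvlan-type driver would have to do with frames coming
back from an adjacent hairpin bridge, i.e. an address-table lookup on both
the source and destination before delivery:

/* Sketch only: handling of frames received on the uplink of a
 * macvlan/VEPA-style mux.  A frame whose source address belongs to one of
 * our own ports is a reflected copy of a local transmission, so its
 * originator must never receive it; everything else is delivered much as
 * a bridge would deliver it, using the address table. */
#include <linux/etherdevice.h>
#include <linux/list.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct vepa_port {
	struct list_head	list;
	struct net_device	*dev;		/* the virtual interface */
	unsigned char		mac[ETH_ALEN];	/* its station address */
};

static void vepa_rx_from_uplink(struct list_head *ports, struct sk_buff *skb)
{
	struct ethhdr *eth = eth_hdr(skb);
	struct vepa_port *p, *src = NULL, *dst = NULL;

	/* Which local port (if any) sent this frame, and which (if any)
	 * is it addressed to? */
	list_for_each_entry(p, ports, list) {
		if (ether_addr_equal(p->mac, eth->h_source))
			src = p;
		if (ether_addr_equal(p->mac, eth->h_dest))
			dst = p;
	}

	if (dst) {
		/* Known unicast, including VM-to-VM traffic hairpinned back
		 * by the adjacent bridge: deliver to the addressed port. */
		skb->dev = dst->dev;
		netif_rx(skb);
		return;
	}

	/* Multicast, broadcast or unknown unicast: deliver to every port
	 * except the original sender, so a source never sees its own
	 * reflected frame. */
	list_for_each_entry(p, ports, list) {
		struct sk_buff *nskb;

		if (p == src)
			continue;
		nskb = skb_clone(skb, GFP_ATOMIC);
		if (nskb) {
			nskb->dev = p->dev;
			netif_rx(nskb);
		}
	}
	kfree_skb(skb);
}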

Paul
