Date:	Mon, 3 May 2010 17:06:59 -0700
From:	"George B." <georgeb@...il.com>
To:	netdev <netdev@...r.kernel.org>
Subject: Question about vlans, bonding, etc.

Watching the "Receive issues with bonding and vlans" thread brought a
question to mind.  In what order should things be done for best
performance?

For example, say I have a pair of ethernet interfaces.  Do I slave the
ethernet interfaces to the bond device and then make the vlans on the
bond devices?
Or do I make the vlans on the ethernet devices and then bond the vlan
interfaces?

In the first case I would have:

bond0.3--|       |------eth0
         |-bond0-|
bond0.5--|       |------eth1
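For reference, the first layout could be built roughly as follows with iproute2 (a sketch only, not something from the original mail; interface names, VLAN IDs 3/5, and the bond mode are just illustrative assumptions matching the diagram):

```shell
# Config 1 sketch: bond the NICs first, then stack VLANs on the bond.
# Assumes a modern iproute2; bond mode is an arbitrary example.
ip link add bond0 type bond mode active-backup

# Slaves must be down before being enslaved.
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up

# VLANs created on top of the bond device.
ip link add link bond0 name bond0.3 type vlan id 3
ip link add link bond0 name bond0.5 type vlan id 5
ip link set bond0.3 up
ip link set bond0.5 up
```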

The second case would be:

      |-----------------eth0.5--|
      |         |-------eth0.3--eth0
bond0-|   bond1-|
      |         |-------eth1.3--eth1
      |-----------------eth1.5--|
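And the second layout, again as a hedged sketch with iproute2 (names, IDs, and bond mode are assumptions taken from the diagram, not commands from the original mail):

```shell
# Config 2 sketch: VLANs directly on each NIC, then bond the
# matching VLAN interfaces together.
ip link set eth0 up
ip link set eth1 up
ip link add link eth0 name eth0.3 type vlan id 3
ip link add link eth0 name eth0.5 type vlan id 5
ip link add link eth1 name eth1.3 type vlan id 3
ip link add link eth1 name eth1.5 type vlan id 5

ip link add bond0 type bond mode active-backup   # carries VLAN 5
ip link add bond1 type bond mode active-backup   # carries VLAN 3

ip link set eth0.5 down
ip link set eth0.5 master bond0
ip link set eth1.5 down
ip link set eth1.5 master bond0
ip link set eth0.3 down
ip link set eth0.3 master bond1
ip link set eth1.3 down
ip link set eth1.3 master bond1

ip link set bond0 up
ip link set bond1 up
```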

I am currently using the first method, since it seemed more intuitive to me
at the time to bond the ethernets and then put the vlans on the bonds,
but it seems life might be easier for the vlan driver if it is bound
directly to the hardware.  I am using Intel NICs (igb driver) with 4
queues per NIC.

Would a performance difference be expected between the two
configurations?  In the first configuration, can the vlan driver "see
through" the bond interface to the hardware and take advantage of
multiple queues if the hardware supports them?
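One way to probe the multiqueue question empirically (my own sketch, not from the original mail; exact IRQ naming and sysfs layout vary by driver and kernel version) is to compare what the kernel exposes at each layer:

```shell
# How many queue directories does the physical NIC expose?
ls /sys/class/net/eth0/queues/        # rx-*/tx-* entries

# And how many does the stacked device see?
ls /sys/class/net/bond0/queues/

# Per-queue interrupts on the NIC (igb typically registers one
# IRQ per queue pair; grep pattern is an assumption).
grep eth0 /proc/interrupts

# Newer ethtool can also report hardware channel counts.
ethtool -l eth0
```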

George Bonser
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
