Message-ID: <AANLkTikVJqN6m5nsJsFSNHS_HbOFyt0hGr_8MHu6tWDR@mail.gmail.com>
Date: Thu, 13 May 2010 18:10:33 -0700
From: "George B." <georgeb@...il.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: netdev <netdev@...r.kernel.org>
Subject: Re: Question about vlans, bonding, etc.
On Mon, May 3, 2010 at 9:48 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> Le lundi 03 mai 2010 à 17:06 -0700, George B. a écrit :
>> Watching the "Receive issues with bonding and vlans" thread brought a
>> question to mind. In what order should things be done for best
>> performance?
>>
>> For example, say I have a pair of ethernet interfaces. Do I slave the
>> ethernet interfaces to the bond device and then make the vlans on the
>> bond devices?
>> Or do I make the vlans on the ethernet devices and then bond the vlan
>> interfaces?
>>
>> In the first case I would have:
>>
>>
>>
>> bond0.3--|         |------eth0
>>          |--bond0--|
>> bond0.5--|         |------eth1
>>
>> The second case would be:
>>
>>       |------------------eth0.5-----|
>>       |          |-------eth0.3---eth0
>> bond0-|    bond1-|
>>       |          |-------eth1.3---eth1
>>       |------------------eth1.5-----|
>>
>> I am using the first method currently, as it seemed more intuitive to me
>> at the time to bond the ethernets and then put the vlans on the bonds,
>> but it seems life might be easier for the vlan driver if it is bound
>> directly to the hardware. I am using Intel NICs (igb driver) with 4
>> queues per NIC.
>>
>> Would there be a performance difference expected between the two
>> configurations? Can the vlan driver "see through" the bond interface
>> to the
>> hardware and take advantage of multiple queues if the hardware
>> supports it in the first configuration?
>
> Unfortunately, the first combination is not multiqueue aware yet.
>
> You'll need to patch the bonding driver like this if your NICs have 4
> queues:
>
> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
> index 85e813c..98cc3c0 100644
> --- a/drivers/net/bonding/bond_main.c
> +++ b/drivers/net/bonding/bond_main.c
> @@ -4915,8 +4915,8 @@ int bond_create(struct net *net, const char *name)
>
>  	rtnl_lock();
>
> -	bond_dev = alloc_netdev(sizeof(struct bonding), name ? name : "",
> -				bond_setup);
> +	bond_dev = alloc_netdev_mq(sizeof(struct bonding), name ? name : "",
> +				   bond_setup, 4);
>  	if (!bond_dev) {
>  		pr_err("%s: eek! can't alloc netdev!\n", name);
>  		rtnl_unlock();
>
>
>
I just got around to fooling with this some. It would seem to me that
I should be able to get better performance if I could create the vlans
on the ethernet interfaces and then bond them together. For example,
it seems intuitive that I should be able to create vlan eth0.5 and
eth1.5 and then enslave them. Problem is that when I try to create
vlan5 on the second interface, vconfig balks that it already exists.
Yes, I know it exists, but I want vlan5 on two interfaces and I want
to use ifenslave to bond them together into a bond interface. So if I
have 10 vlans, I would have 10 vlans on each ethernet interface and 10
bond interfaces. The way it seems I am forced to do it now is bond
the two NICs together and add all the vlans to the single bond
interface. It seems that the bond interface would then become a
bottleneck for all the vlans.
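For reference, the way I set things up now looks roughly like this (bonding options and interface names are illustrative, not my exact configuration):

```shell
# First configuration: enslave the NICs first, then create vlans on the bond.
# (miimon value here is illustrative only.)
modprobe bonding miimon=100
ip link set bond0 up
ifenslave bond0 eth0 eth1
vconfig add bond0 3        # creates bond0.3
vconfig add bond0 5        # creates bond0.5
```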
Is there some physical reason why it is not possible to create the
same vlan on multiple interfaces as long as the naming convention
keeps them named separately so they can be distinguished from each
other?
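For concreteness, this is roughly what I tried (commands from memory):

```shell
# Second configuration, as attempted: create the same vlan on both NICs,
# then bond the resulting vlan interfaces together.
vconfig add eth0 5         # vlan 5 on eth0 is created fine
vconfig add eth1 5         # vconfig balks: vlan 5 already exists
# The intent was then to enslave the two vlan interfaces, e.g.:
# ifenslave bond5 <eth0-vlan5-interface> <eth1-vlan5-interface>
```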