Message-ID: <21433.1271354986@death.nxdomain.ibm.com>
Date: Thu, 15 Apr 2010 11:09:46 -0700
From: Jay Vosburgh <fubar@...ibm.com>
To: Eric Dumazet <eric.dumazet@...il.com>
cc: "George B." <georgeb@...il.com>, netdev@...r.kernel.org
Subject: Re: Network multiqueue question

Eric Dumazet <eric.dumazet@...il.com> wrote:
>On Thursday, 15 April 2010 at 09:58 -0700, George B. wrote:
>> I am in need of a little education on multiqueue and was wondering if
>> someone here might be able to help me.
>>
>> Given the Intel igb network driver, it appears I can do something like:
>>
>> tc qdisc add dev eth0 root handle 1: multiq
>>
>> which works and reports 4 bands: dev eth0 root refcnt 4 bands 4/4
>>
>> But our network is a little more complicated. Above the ethernet we
>> have the bonding driver, which is using mode 2 (balance-xor) bonding
>> with two ethernet slaves. Then we have vlans on the bond interface. Our
>> production traffic is on a vlan and resource contention is an issue as
>> these are busy machines.
>>
>> It is my understanding that the vlan driver became multiqueue aware in
>> 2.6.32 (we are currently using 2.6.31).
>>
>> It would seem that the first thing the kernel would encounter with
>> traffic headed out would be the vlan interface, and then the bond
>> interface, and then the physical ethernet interface. Is that correct?
>> So with my kernel, I would seem to get no utility from multiq on the
>> ethernet interface if the vlan interface is going to be a
>> single-threaded bottleneck. What about the bond driver? Is it
>> currently multiqueue aware?
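Roughly, yes: each layer's transmit routine retargets the skb at the
device below it and re-enters the stack. A simplified paraphrase of the
2.6.31 sources (not verbatim):

	/* net/8021q/vlan_dev.c, simplified: the vlan device tags the
	 * frame, then hands it to the underlying device. */
	static int vlan_dev_hard_start_xmit(struct sk_buff *skb,
					    struct net_device *dev)
	{
		/* ... insert the 802.1q tag ... */
		skb->dev = vlan_dev_info(dev)->real_dev; /* e.g. bond0 */
		return dev_queue_xmit(skb);  /* re-enter the stack */
	}

	/* drivers/net/bonding/bond_main.c, simplified: once a slave
	 * has been chosen, bonding does the same hand-off to the
	 * physical NIC. */
	int bond_dev_queue_xmit(struct bonding *bond, struct sk_buff *skb,
				struct net_device *slave_dev)
	{
		skb->dev = slave_dev;  /* e.g. eth0 */
		dev_queue_xmit(skb);
		return 0;
	}

So a qdisc attached at each layer runs in turn: vlan first, then bond,
then the physical interface.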
>>
>> I am trying to get some sort of logical picture of how all these things
>> interact with each other, so I can make things a little more efficient
>> and reduce resource contention in the application while still using the
>> network ports/interfaces efficiently.
>>
>> If someone feels up to the task of sending a little education my way,
>> I would be most appreciative. There doesn't seem to be a whole lot of
>> documentation floating around about multiqueue other than a blurb of
>> text in the kernel documentation and David Miller's presentation from
>> last year.
>
>Hi George
>
>Vlan is multiqueue aware, but bonding unfortunately is not at the
>moment.
>
>We could make it 'multiqueue' (a patch was submitted by Oleg A.
>Arkhangelsky a while ago), but the bonding xmit routine needs to take a
>central lock, shared by all queues, so it won't be very efficient...
The lock is a read lock, so in principle it should be possible
to enter the bonding transmit function on multiple CPUs at the same
time. The lock's cache line may bounce between those CPUs, though.
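For reference, the per-mode transmit routines in 2.6.31 all follow
roughly this pattern (a simplified sketch, not verbatim source):

	static int bond_xmit_roundrobin(struct sk_buff *skb,
					struct net_device *bond_dev)
	{
		struct bonding *bond = netdev_priv(bond_dev);

		/* One rwlock shared by every TX queue and every CPU.
		 * Readers don't exclude each other, but the lock word's
		 * cache line still bounces on every packet. */
		read_lock(&bond->lock);

		/* ... choose the next slave, then hand the skb down
		 * via bond_dev_queue_xmit() ... */

		read_unlock(&bond->lock);
		return 0;
	}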
>Since this bothers me a bit, I will probably work on this in the near
>future (adding real multiqueue capability and RCU to the bonding fast
>paths).
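If that conversion happens, I'd guess the fast path ends up looking
something like this (a hypothetical sketch, assuming curr_active_slave
becomes an RCU-protected pointer; this is not an existing patch):

	/* Hypothetical RCU-ified fast path: no lock is taken and no
	 * shared cache line is written, so CPUs don't contend. */
	rcu_read_lock();
	slave = rcu_dereference(bond->curr_active_slave);
	if (slave)
		bond_dev_queue_xmit(bond, skb, slave->dev);
	rcu_read_unlock();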
>
>Ref: http://permalink.gmane.org/gmane.linux.network/152987
The question I have about it (and the above patch), is: what
does multi-queue "awareness" really mean for a bonding device? How does
allocating a bunch of TX queues help, given that the determination of
the transmitting device hasn't necessarily been made?
I haven't had the chance to acquire some multi-queue network
cards and check things out with bonding, so I'm not really sure how it
should work. Should the bond look, from a multi-queue perspective, like
the largest slave, or should it look like the sum of the slaves? Some
of this may be mode-specific, as well.
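For concreteness, the minimal version of "awareness" would be
something like the hypothetical sketch below. alloc_netdev_mq() and
skb_tx_hash() are existing kernel interfaces, but the queue count
tx_queues and the pass-through bond_select_queue() are made-up
illustrations, and exactly the open questions:

	/* Give the bond some number of real TX queues at creation
	 * time.  Whether tx_queues should track the largest slave or
	 * the sum of the slaves is the question above. */
	bond_dev = alloc_netdev_mq(sizeof(struct bonding), "bond%d",
				   bond_setup, tx_queues);

	/* Spread flows across those queues; the slave's own queue
	 * selection still runs later, when the skb is re-queued to
	 * the physical device. */
	static u16 bond_select_queue(struct net_device *dev,
				     struct sk_buff *skb)
	{
		return skb_tx_hash(dev, skb);
	}

That would at least let a multiq qdisc on the bond classify traffic
into separate queues, but it says nothing about how those queues map
onto any given slave's hardware queues.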
-J
---
-Jay Vosburgh, IBM Linux Technology Center, fubar@...ibm.com