Message-ID: <1271356910.16881.2909.camel@edumazet-laptop>
Date: Thu, 15 Apr 2010 20:41:50 +0200
From: Eric Dumazet <eric.dumazet@...il.com>
To: Jay Vosburgh <fubar@...ibm.com>
Cc: "George B." <georgeb@...il.com>, netdev@...r.kernel.org
Subject: Re: Network multiqueue question
On Thursday, 15 April 2010 at 11:09 -0700, Jay Vosburgh wrote:
> Eric Dumazet <eric.dumazet@...il.com> wrote:
>
> >Vlan is multiqueue aware, but bonding unfortunately is not at this
> >moment.
> >
> >We could make it 'multiqueue' (a patch was submitted by Oleg A.
> >Arkhangelsky a while ago), but the bonding xmit routine needs to take a
> >central lock, shared by all queues, so it won't be very efficient...
>
> The lock is a read lock, so theoretically it should be possible
> to enter the bonding transmit function on multiple CPUs at the same
> time. The lock may thrash around, though.
>
Yes, and with 10Gb cards this is a limiting factor if you want to send
14 million packets per second ;)

read_lock() is one atomic op, dirtying a cache line.
read_unlock() is one atomic op, dirtying the cache line again (if contended).

In active-passive mode, using RCU should be really easy, given that
netdevices are already RCU compatible. This way, each CPU only reads the
bonding state, without any shared memory writes.
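
Something like the following sketch, just to illustrate the direction (the
struct and field names are hypothetical, not the real bonding internals;
real bonding would need more care around slave removal):

/* Sketch only: an RCU-protected active-passive xmit path, instead of
 * read_lock(&bond->lock) on every packet.  'bond_ab_priv' and
 * 'curr_active' are made-up names for illustration.
 */
struct bond_ab_priv {
	struct net_device *curr_active;	/* written with rcu_assign_pointer() */
};

static netdev_tx_t bond_ab_xmit_sketch(struct sk_buff *skb,
					struct net_device *bond_dev)
{
	struct bond_ab_priv *bond = netdev_priv(bond_dev);
	struct net_device *slave_dev;

	rcu_read_lock();		/* no shared cache line dirtied */
	slave_dev = rcu_dereference(bond->curr_active);
	if (unlikely(!slave_dev)) {
		rcu_read_unlock();
		dev_kfree_skb_any(skb);
		return NETDEV_TX_OK;
	}
	skb->dev = slave_dev;
	dev_queue_xmit(skb);
	rcu_read_unlock();
	return NETDEV_TX_OK;
}

The failover path would then publish the new active slave with
rcu_assign_pointer() and wait with synchronize_rcu() before releasing the
old one, so the hot path never writes to shared state.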
> >Since this bothers me a bit, I will probably work on this in the near
> >future (adding real multiqueue capability and RCU to bonding fast
> >paths).
> >
> >Ref: http://permalink.gmane.org/gmane.linux.network/152987
>
> The question I have about it (and the above patch), is: what
> does multi-queue "awareness" really mean for a bonding device? How does
> allocating a bunch of TX queues help, given that the determination of
> the transmitting device hasn't necessarily been made?
>
Well, this is a problem that was also taken into account for vlan; you
might take a look at this commit:
commit 669d3e0babb40018dd6e78f4093c13a2eac73866
Author: Vasu Dev <vasu.dev@...el.com>
Date:   Tue Mar 23 14:41:45 2010 +0000

    vlan: adds vlan_dev_select_queue

    This is required to correctly select vlan tx queue for a driver
    supporting multi tx queue with ndo_select_queue implemented, since
    currently the selected vlan tx queue is unaligned to the queue
    selected by the real net_device ndo_select_queue.

    Unaligned vlan tx queue selection causes thrash with higher vlan
    tx lock contention for least fcoe traffic and wrong socket tx
    queue_mapping for ixgbe having ndo_select_queue implemented.

    -v2

    As per Eric Dumazet <eric.dumazet@...il.com> comments, mirrored
    vlan net_device_ops to have them with and without
    vlan_dev_select_queue, and then select according to whether the real
    dev ndo_select_queue is present or not for a vlan net_device. This
    is to completely skip vlan_dev_select_queue calls for a real
    net_device not supporting ndo_select_queue.

    Signed-off-by: Vasu Dev <vasu.dev@...el.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@...el.com>
    Acked-by: Eric Dumazet <eric.dumazet@...il.com>
    Signed-off-by: David S. Miller <davem@...emloft.net>
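
Applied to bonding, the same idea would be roughly the sketch below. This
is only an illustration of aligning the stacked device's queue_mapping with
the real device, not actual bonding code; get_tx_slave() is a hypothetical
helper, and picking that slave is exactly the open question you raise:

/* Sketch: let a stacked device defer tx queue selection to the real
 * device, as the vlan commit above does.  get_tx_slave() is hypothetical.
 */
static u16 bond_select_queue_sketch(struct net_device *bond_dev,
				    struct sk_buff *skb)
{
	struct net_device *slave_dev = get_tx_slave(bond_dev, skb);
	const struct net_device_ops *ops = slave_dev->netdev_ops;

	if (ops->ndo_select_queue)
		return ops->ndo_select_queue(slave_dev, skb);

	return skb_tx_hash(slave_dev, skb);	/* stack's default hash */
}

The point is just that skb->queue_mapping ends up matching the queue the
real device would pick for itself, so we don't thrash its per-queue tx
locks.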
> 	I haven't had the chance to acquire some multi-queue network
> cards and check things out with bonding, so I'm not really sure how it
> should work. Should the bond look, from a multi-queue perspective, like
> the largest slave, or should it look like the sum of the slaves? Some
> of this may be mode-specific, as well.
>