Message-ID: <1267347938.9082.60.camel@edumazet-laptop>
Date: Sun, 28 Feb 2010 10:05:38 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: "Oleg A. Arkhangelsky" <sysoleg@...dex.ru>
Cc: netdev@...r.kernel.org
Subject: Re: Re: [PATCH net-next-2.6 1/2] mq: support for bonding
On Sunday, 28 February 2010 at 11:25 +0300, "Oleg A. Arkhangelsky"
wrote:
> Hi Eric,
>
> 27.02.10, 17:29, "Eric Dumazet" <eric.dumazet@...il.com>:
>
> > My dev machine has 16 cpus, and my network card has 16 queues per NIC
>
> We should borrow this number from the real device when enslaving it,
> picking the maximum value among the slaves. But the main problem is that
> we don't know anything about the slaves in bond_create(), and there is no
> way to change the number of tx queues later. Maybe we could solve this by
> adding a new module parameter to bonding (num_tx_queues)?
>
It would be pretty hard to adjust the number of tx queues dynamically.
The only current choice would be a sysfs parameter to size the queues of
subsequent bond_create() calls, also visible as a module parameter so
that implicit bond devices get the right number of queues.
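A minimal sketch of what such a module parameter could look like in the
bonding driver (hypothetical: the parameter name, default value, and the
exact call site are assumptions, not part of the posted patch; this is not
compilable standalone):

```c
/* Hypothetical bonding module parameter sizing the tx queues of
 * subsequently created bond devices. */
static int num_tx_queues = 16;
module_param(num_tx_queues, int, 0444);
MODULE_PARM_DESC(num_tx_queues, "Number of tx queues per bond device");

/* bond_create() would then pass it to alloc_netdev_mq(), which
 * allocates a net_device with that many tx queues: */
bond_dev = alloc_netdev_mq(sizeof(struct bonding), name,
			   ether_setup, num_tx_queues);
```

Since the queue count is fixed at alloc_netdev_mq() time, this only sizes
devices created after the parameter is set, which matches the constraint
discussed above.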
> > Every xmit has to get this lock(s) and performance is not optimal.
>
> You're right. I didn't notice it. I see two solutions:
>
> 1) Convert all rw_locks to the RCU mechanism
> 2) Use a plain array instead of a linked list to store the slaves. In
> that case we don't need to take a lock when doing bond_for_each_slave().
>
The best thing would be RCU of course, at least for active/backup mode.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html