Message-ID: <CAF2d9ji7VqqB95nD=VezL6fAaGJKaHCJRYf+b1VsCCLFy4r7NA@mail.gmail.com>
Date: Wed, 18 Feb 2015 17:30:53 -0800
From: Mahesh Bandewar <maheshb@...gle.com>
To: Nikolay Aleksandrov <nikolay@...hat.com>
Cc: Jay Vosburgh <j.vosburgh@...il.com>,
Andy Gospodarek <andy@...yhouse.net>,
Veaceslav Falico <vfalico@...il.com>,
David Miller <davem@...emloft.net>,
Maciej Zenczykowski <maze@...gle.com>,
netdev <netdev@...r.kernel.org>,
Eric Dumazet <edumazet@...gle.com>
Subject: Re: [PATCH next v5 2/6] bonding: implement bond_poll_controller()
On Wed, Feb 18, 2015 at 5:19 PM, Mahesh Bandewar <maheshb@...gle.com> wrote:
> On Wed, Feb 18, 2015 at 4:10 PM, Nikolay Aleksandrov <nikolay@...hat.com> wrote:
>> On 02/18/2015 11:31 PM, Mahesh Bandewar wrote:
>>> This patch implements poll_controller support for the bonding
>>> driver. If a slave has the poll_controller net_device op defined,
>>> this implementation calls it. The implementation is mode agnostic:
>>> it iterates through all slaves (based on mode) and calls the
>>> respective handler.
>>>
>>> Signed-off-by: Mahesh Bandewar <maheshb@...gle.com>
>>> ---
>>> v1:
>>> Initial version
>>> v2:
>>> Eliminate bool variable.
>>> v3:
>>> Rebase
>>> v4:
>>> Removed 3AD port_operational check
>>> v5:
>>> Added rtnl protection for bond_for_each_slave()
>>>
>>> drivers/net/bonding/bond_main.c | 33 +++++++++++++++++++++++++++++++++
>>> 1 file changed, 33 insertions(+)
>>>
>>
>> Hi Mahesh,
>> I should've explained more in my review: you cannot sleep in
>> bond_poll_controller(), so you cannot acquire rtnl like that. I was thinking
>> more about using RCU and switching to the _rcu version of
>> bond_for_each_slave instead.
>>
> That makes sense. The path that triggered this netpoll() could itself be
> holding rtnl, which would deadlock here. I think using the _rcu variant of
> the slave iterator is a good idea; my bad!
>
... however, we cannot use the _rcu variant either, since there is the
netpoll mutex (ni->dev_lock) to deal with!
The fact that we are here at all means that something bad has already
happened, and trying to take additional lock(s) would only complicate the
situation further.
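
For reference, a minimal sketch of what the RCU-based approach Nikolay
suggests might look like, i.e. the loop from the patch below with
rtnl_lock()/rtnl_unlock() replaced by rcu_read_lock() and
bond_for_each_slave_rcu(). This is an illustration only, not the submitted
patch, and it does not resolve the ni->dev_lock concern raised above:

static void bond_poll_controller(struct net_device *bond_dev)
{
	struct bonding *bond = netdev_priv(bond_dev);
	const struct net_device_ops *ops;
	struct netpoll_info *ni;
	struct list_head *iter;
	struct ad_info ad_info;
	struct slave *slave;

	if (BOND_MODE(bond) == BOND_MODE_8023AD)
		if (bond_3ad_get_active_agg_info(bond, &ad_info))
			return;

	/* Sketch only: walk the slave list under RCU instead of rtnl so
	 * that this path never sleeps.  Slaves are freed via RCU, so a
	 * plain read-side critical section is enough for the traversal.
	 */
	rcu_read_lock();
	bond_for_each_slave_rcu(bond, slave, iter) {
		ops = slave->dev->netdev_ops;
		if (!bond_slave_is_up(slave) || !ops->ndo_poll_controller)
			continue;

		if (BOND_MODE(bond) == BOND_MODE_8023AD) {
			struct aggregator *agg =
				SLAVE_AD_INFO(slave)->port.aggregator;

			if (agg &&
			    agg->aggregator_identifier != ad_info.aggregator_id)
				continue;
		}

		/* down_trylock() keeps this non-blocking: if the slave's
		 * netpoll state is busy, simply skip it.
		 */
		ni = rcu_dereference_bh(slave->dev->npinfo);
		if (!ni || down_trylock(&ni->dev_lock))
			continue;
		ops->ndo_poll_controller(slave->dev);
		up(&ni->dev_lock);
	}
	rcu_read_unlock();
}
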
>> Cheers,
>> Nik
>>
>>> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
>>> index b979c265fc51..63e6c0dbe7b3 100644
>>> --- a/drivers/net/bonding/bond_main.c
>>> +++ b/drivers/net/bonding/bond_main.c
>>> @@ -928,6 +928,39 @@ static inline void slave_disable_netpoll(struct slave *slave)
>>>
>>> static void bond_poll_controller(struct net_device *bond_dev)
>>> {
>>> + struct bonding *bond = netdev_priv(bond_dev);
>>> + struct slave *slave = NULL;
>>> + struct list_head *iter;
>>> + struct ad_info ad_info;
>>> + struct netpoll_info *ni;
>>> + const struct net_device_ops *ops;
>>> +
>>> + if (BOND_MODE(bond) == BOND_MODE_8023AD)
>>> + if (bond_3ad_get_active_agg_info(bond, &ad_info))
>>> + return;
>>> +
>>> + rtnl_lock();
>>> + bond_for_each_slave(bond, slave, iter) {
>>> + ops = slave->dev->netdev_ops;
>>> + if (!bond_slave_is_up(slave) || !ops->ndo_poll_controller)
>>> + continue;
>>> +
>>> + if (BOND_MODE(bond) == BOND_MODE_8023AD) {
>>> + struct aggregator *agg =
>>> + SLAVE_AD_INFO(slave)->port.aggregator;
>>> +
>>> + if (agg &&
>>> + agg->aggregator_identifier != ad_info.aggregator_id)
>>> + continue;
>>> + }
>>> +
>>> + ni = rcu_dereference_bh(slave->dev->npinfo);
>>> + if (down_trylock(&ni->dev_lock))
>>> + continue;
>>> + ops->ndo_poll_controller(slave->dev);
>>> + up(&ni->dev_lock);
>>> + }
>>> + rtnl_unlock();
>>> }
>>>
>>> static void bond_netpoll_cleanup(struct net_device *bond_dev)
>>>
>>