Message-ID: <612b5e32-ea11-428e-0c17-e2977185f045@huawei.com>
Date: Wed, 21 Apr 2021 11:34:55 +0800
From: jin yiting <jinyiting@...wei.com>
To: Jay Vosburgh <jay.vosburgh@...onical.com>
CC: <vfalico@...il.com>, <andy@...yhouse.net>, <davem@...emloft.net>,
<kuba@...nel.org>, <netdev@...r.kernel.org>, <security@...nel.org>,
<linux-kernel@...r.kernel.org>, <xuhanbing@...wei.com>,
<wangxiaogang3@...wei.com>
Subject: Re: [PATCH] bonding: 3ad: update slave arr after initialize

On 2021/4/20 13:04, Jay Vosburgh wrote:
> jin yiting <jinyiting@...wei.com> wrote:
> [...]
>>> The described issue is a race condition (in that
>>> ad_agg_selection_logic clears agg->is_active under mode_lock, but
>>> bond_open -> bond_update_slave_arr is inspecting agg->is_active outside
>>> the lock). I don't see how the above change will reliably manage this;
>>> the real issue looks to be that bond_update_slave_arr is committing
>>> changes to the array (via bond_reset_slave_arr) based on a racy
>>> inspection of the active aggregator state while it is in flux.
>>>
>>> Also, the description of the issue says "The best aggregator in
>>> ad_agg_selection_logic has not changed, no need to update slave arr,"
>>> but the change above does the opposite, and will set update_slave_arr
>>> when the aggregator has not changed (update_slave_arr remains false at
>>> return of ad_agg_selection_logic).
>>>
>>> I believe I understand the described problem, but I don't see
>>> how the patch fixes it. I suspect (but haven't tested) that the proper
>>> fix is to acquire mode_lock in bond_update_slave_arr while calling
>>> bond_3ad_get_active_agg_info to avoid conflict with the state machine.
>>>
>>> -J
>>>
>>> ---
>>> -Jay Vosburgh, jay.vosburgh@...onical.com
>>> .
>>>
>>
>> Thank you for your reply. The previous patch did indeed update the
>> slave arr redundantly. Thank you for the correction.
>>
>> As you said, holding mode_lock in bond_update_slave_arr while
>> calling bond_3ad_get_active_agg_info avoids the conflict with the state
>> machine. I have tested this patch with ifdown/ifup operations on the
>> bond and its slaves.
>>
>> But bond_update_slave_arr is expected to hold RTNL only and NO
>> other lock, and it has WARN_ON(lockdep_is_held(&bond->mode_lock)) in
>> its body. I'm not sure that holding mode_lock in
>> bond_update_slave_arr while calling bond_3ad_get_active_agg_info is
>> correct.
>
> 	That WARN_ON came up in discussion recently, and my opinion is
> that it's incorrect, and is trying to ensure bond_update_slave_arr is
> safe for a potential sleep when allocating memory.
>
> https://lore.kernel.org/netdev/20210322123846.3024549-1-maximmi@nvidia.com/
>
> The original authors haven't replied, so I would suggest you
> remove the WARN_ON and the surrounding CONFIG_LOCKDEP ifdefs as part of
> your patch and replace it with a call to might_sleep.
>
> The other callers of bond_3ad_get_active_agg_info are generally
> obtaining the state in order to report it to user space, so I think it's
> safe to leave those calls not holding the mode_lock. The race is still
> there, but the data returned to user space is a snapshot and so may
> reflect an incomplete state during a transition. Further, having the
> inspection functions acquire the mode_lock permits user space to spam
> the lock with little effort.
>
> -J
>
>> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
>> index 74cbbb2..db988e5 100644
>> --- a/drivers/net/bonding/bond_main.c
>> +++ b/drivers/net/bonding/bond_main.c
>> @@ -4406,7 +4406,9 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
>>  	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
>>  		struct ad_info ad_info;
>>  
>> +		spin_lock_bh(&bond->mode_lock);
>>  		if (bond_3ad_get_active_agg_info(bond, &ad_info)) {
>> +			spin_unlock_bh(&bond->mode_lock);
>>  			pr_debug("bond_3ad_get_active_agg_info failed\n");
>>  			/* No active aggragator means it's not safe to use
>>  			 * the previous array.
>> @@ -4414,6 +4416,7 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
>>  			bond_reset_slave_arr(bond);
>>  			goto out;
>>  		}
>> +		spin_unlock_bh(&bond->mode_lock);
>>  		agg_id = ad_info.aggregator_id;
>>  	}
>>  	bond_for_each_slave(bond, slave, iter) {
> ---
> -Jay Vosburgh, jay.vosburgh@...onical.com
> .
>
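
To make sure I understand the race, the two paths involved look roughly
like this (an illustration of the flow described above, not the literal
source):

	/* 802.3ad state machine side, runs under bond->mode_lock */
	spin_lock_bh(&bond->mode_lock);
	/* ad_agg_selection_logic(): the old aggregator is de-activated
	 * before the (possibly unchanged) best aggregator is marked
	 * active again, so there is a short window with no active
	 * aggregator.
	 */
	agg->is_active = 0;
	/* ... aggregator selection ... */
	best->is_active = 1;
	spin_unlock_bh(&bond->mode_lock);

	/* bond_open -> bond_update_slave_arr side, before this patch:
	 * the aggregator state is read without mode_lock, so it can hit
	 * the window above and reset the slave array even though the
	 * best aggregator never changed.
	 */
	if (bond_3ad_get_active_agg_info(bond, &ad_info)) {
		bond_reset_slave_arr(bond);	/* spurious reset */
		goto out;
	}
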
I have removed the WARN_ON and the surrounding CONFIG_LOCKDEP ifdefs in
the new patch and replaced them with a call to might_sleep().
I will send a new patch.
Thank you for your guidance.

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 74cbbb2..83ef62d 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4391,9 +4391,7 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
 	int agg_id = 0;
 	int ret = 0;
 
-#ifdef CONFIG_LOCKDEP
-	WARN_ON(lockdep_is_held(&bond->mode_lock));
-#endif
+	might_sleep();
 
 	usable_slaves = kzalloc(struct_size(usable_slaves, arr,
 				    bond->slave_cnt), GFP_KERNEL);
@@ -4406,7 +4404,9 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
 	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
 		struct ad_info ad_info;
 
+		spin_lock_bh(&bond->mode_lock);
 		if (bond_3ad_get_active_agg_info(bond, &ad_info)) {
+			spin_unlock_bh(&bond->mode_lock);
 			pr_debug("bond_3ad_get_active_agg_info failed\n");
 			/* No active aggragator means it's not safe to use
 			 * the previous array.
@@ -4414,6 +4414,7 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
 			bond_reset_slave_arr(bond);
 			goto out;
 		}
+		spin_unlock_bh(&bond->mode_lock);
 		agg_id = ad_info.aggregator_id;
 	}
 	bond_for_each_slave(bond, slave, iter) {
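
Note that might_sleep() still documents the potential sleep: the
usable_slaves allocation above uses GFP_KERNEL, so bond_update_slave_arr
keeps its "may sleep, do not call from atomic context" requirement, and
mode_lock is only held around the short bond_3ad_get_active_agg_info()
read, after that allocation.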
--