Message-Id: <20210426.120822.232032630973964712.davem@davemloft.net>
Date: Mon, 26 Apr 2021 12:08:22 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: jay.vosburgh@...onical.com
Cc: jinyiting@...wei.com, vfalico@...il.com, andy@...yhouse.net,
kuba@...nel.org, netdev@...r.kernel.org, security@...nel.org,
linux-kernel@...r.kernel.org, xuhanbing@...wei.com,
wangxiaogang3@...wei.com
Subject: Re: [PATCH] bonding: 3ad: Fix the conflict between
bond_update_slave_arr and the state machine
From: Jay Vosburgh <jay.vosburgh@...onical.com>
Date: Mon, 26 Apr 2021 08:22:37 -0700
> David Miller <davem@...emloft.net> wrote:
>
>>From: jinyiting <jinyiting@...wei.com>
>>Date: Wed, 21 Apr 2021 16:38:21 +0800
>>
>>> The bond works in 802.3ad mode (mode 4) and has negotiated normally.
>>> After down/up operations on the bond, there is a chance that
>>> bond->slave_arr is left NULL.
>>>
>>> Test commands:
>>> ifconfig bond1 down
>>> ifconfig bond1 up
>>>
>>> The conflict occurs in the following process:
>>>
>>> __dev_open (CPU A)
>>> --bond_open
>>> --queue_delayed_work(bond->wq,&bond->ad_work,0);
>>> --bond_update_slave_arr
>>> --bond_3ad_get_active_agg_info
>>>
>>> ad_work(CPU B)
>>> --bond_3ad_state_machine_handler
>>> --ad_agg_selection_logic
>>>
>>> ad_work runs on CPU B. In ad_agg_selection_logic, every agg->is_active
>>> flag is cleared. If bond_3ad_get_active_agg_info fails on CPU A before
>>> the new active aggregator has been selected on CPU B, bond->slave_arr
>>> is set to NULL, even though the best aggregator chosen by
>>> ad_agg_selection_logic has not changed and no slave_arr update was
>>> actually needed.
>>>
>>> The root of the conflict is that ad_agg_selection_logic clears
>>> agg->is_active under mode_lock, while bond_open -> bond_update_slave_arr
>>> inspects agg->is_active outside the lock.
>>>
>>> Also, bond_update_slave_arr may legitimately sleep when allocating
>>> memory, so replace the WARN_ON with a call to might_sleep.
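
For reference, the WARN_ON/might_sleep change being described would look
roughly like the fragment below. This is a sketch of the idea, not the exact
hunk from the patch:

```c
	/* before: warn if the caller holds mode_lock, since this
	 * function may sleep while allocating the new slave array */
	WARN_ON(lockdep_is_held(&bond->mode_lock));

	/* after: might_sleep() documents the same constraint and, with
	 * CONFIG_DEBUG_ATOMIC_SLEEP, complains if the function is called
	 * from any atomic context, which covers the spinlock case too */
	might_sleep();
```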
>>>
>>> Signed-off-by: jinyiting <jinyiting@...wei.com>
>>> ---
>>>
>>> Previous versions:
>>> * https://lore.kernel.org/netdev/612b5e32-ea11-428e-0c17-e2977185f045@huawei.com/
>>>
>>> drivers/net/bonding/bond_main.c | 7 ++++---
>>> 1 file changed, 4 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
>>> index 74cbbb2..83ef62d 100644
>>> --- a/drivers/net/bonding/bond_main.c
>>> +++ b/drivers/net/bonding/bond_main.c
>>> @@ -4406,7 +4404,9 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
>>>  	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
>>>  		struct ad_info ad_info;
>>>  
>>> +		spin_lock_bh(&bond->mode_lock);
>>
>>The code paths that call this function with mode_lock held will now deadlock.
>
> No path should be calling bond_update_slave_arr with mode_lock
> already held (it expects RTNL only); did you find one?
>
> My concern is that there's something else that does the opposite
> order, i.e., mode_lock first, then RTNL, but I haven't found an example.
>
This patch is removing a lockdep assertion making sure that mode_lock was held
when this function was called. That should have been triggering all the time, right?
Thanks.