Message-ID: <47d9f710-59f7-0ccc-d41b-ee7ee0f69017@nvidia.com>
Date: Wed, 28 Jul 2021 10:34:35 +0300
From: Nikolay Aleksandrov <nikolay@...dia.com>
To: Yufeng Mo <moyufeng@...wei.com>, davem@...emloft.net,
kuba@...nel.org, jay.vosburgh@...onical.com, jiri@...nulli.us
Cc: netdev@...r.kernel.org, shenjian15@...wei.com,
lipeng321@...wei.com, yisen.zhuang@...wei.com,
linyunsheng@...wei.com, zhangjiaran@...wei.com,
huangguangbin2@...wei.com, chenhao288@...ilicon.com,
salil.mehta@...wei.com, linuxarm@...wei.com, linuxarm@...neuler.org
Subject: Re: [PATCH net-next] bonding: 3ad: fix the concurrency between
__bond_release_one() and bond_3ad_state_machine_handler()
On 28/07/2021 09:19, Yufeng Mo wrote:
> Some time ago, I reported a calltrace issue
> "did not find a suitable aggregator", please see [1].
> After a period of analysis and reproduction, I found
> that this problem is caused by a race condition.
>
> Before the problem occurs, the bond structure is as follows:
>
> bond0 - slave0(eth0) - agg0.lag_ports -> port0 - port1
>   \         \
>    \         port0
>     \
>      slave1(eth1) - agg1.lag_ports -> NULL
>              \
>               port1
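>
> For reference, the linkage behind that picture is roughly the
> following (heavily simplified from the structs in
> drivers/net/bonding/bond_3ad.h; most fields omitted):
>
>         struct aggregator;
>
>         struct port {
>                 struct port *next_port_in_aggregator;  /* next port on the same agg */
>                 struct aggregator *aggregator;         /* agg this port is attached to */
>                 /* ... */
>         };
>
>         struct aggregator {
>                 struct port *lag_ports;                /* head of this agg's port list */
>                 struct slave *slave;                   /* slave owning this agg */
>                 /* ... */
>         };
>
> So above, agg0.lag_ports chains port0 -> port1 via
> next_port_in_aggregator, while agg1.lag_ports is NULL.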
>
> If we run 'ifenslave bond0 -d eth1', the flow is as follows:
>
> executing __bond_release_one()
>     |
> bond_upper_dev_unlink()[step1]
>     |                |               |
>     |                |   bond_3ad_lacpdu_recv()
>     |                |   ->bond_3ad_rx_indication()
>     |                |         spin_lock_bh()
>     |                |   ->ad_rx_machine()
>     |                |   ->__record_pdu()[step2]
>     |                |         spin_unlock_bh()
>     |                |               |
>     |   bond_3ad_state_machine_handler()
>     |         spin_lock_bh()
>     |   ->ad_port_selection_logic()
>     |   ->try to find free aggregator[step3]
>     |   ->try to find suitable aggregator[step4]
>     |   ->did not find a suitable aggregator[step5]
>     |         spin_unlock_bh()
>     |                |
>     |                |
> bond_3ad_unbind_slave()              |
>     spin_lock_bh()
>     spin_unlock_bh()
>
> step1: slave1(eth1) has already been removed from the list, but port1
>        remains
> step2: a lacpdu is received and port0 is updated
> step3: port0 is removed from agg0.lag_ports, so the structure is now
>        "agg0.lag_ports -> port1" and agg0 is not free. At the same
>        time, slave1/agg1 was already removed from the list in step1,
>        so no free aggregator can be found.
> step4: no suitable aggregator can be found because of step2
> step5: a calltrace is triggered since port->aggregator is NULL
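>
> For step3-step5, the shape of the search in ad_port_selection_logic()
> (drivers/net/bonding/bond_3ad.c) is roughly the following --
> paraphrased, not the exact code, with declarations elided:
>
>         struct aggregator *free_aggregator = NULL;
>
>         /* port0 has just been detached from agg0 (step3); look for a new agg */
>         bond_for_each_slave(bond, slave, iter) {
>                 struct aggregator *agg = &(SLAVE_AD_INFO(slave)->aggregator);
>
>                 if (!agg->lag_ports) {
>                         /* remember a free aggregator for later */
>                         if (!free_aggregator)
>                                 free_aggregator = agg;
>                         continue;
>                 }
>                 /* otherwise check whether this agg's oper key / partner
>                  * parameters still match the port's -- after step2 they
>                  * no longer do for agg0
>                  */
>         }
>
> Because step1 already removed slave1 from the list that
> bond_for_each_slave() walks, agg1 is never seen here, and agg0 still
> holds port1, so neither a free nor a suitable aggregator is found and
> port->aggregator stays NULL.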
>
> To solve this concurrency problem, the scope of bond->mode_lock
> is extended from covering only bond_3ad_unbind_slave() to covering
> both bond_upper_dev_unlink() and bond_3ad_unbind_slave().
>
> [1]https://lore.kernel.org/netdev/10374.1611947473@famine/
>
> Signed-off-by: Yufeng Mo <moyufeng@...wei.com>
> Acked-by: Jay Vosburgh <jay.vosburgh@...onical.com>
> ---
> drivers/net/bonding/bond_3ad.c | 7 +------
> drivers/net/bonding/bond_main.c | 6 +++++-
> 2 files changed, 6 insertions(+), 7 deletions(-)
>
[snip]
> /**
> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
> index 0ff7567..deb019e 100644
> --- a/drivers/net/bonding/bond_main.c
> +++ b/drivers/net/bonding/bond_main.c
> @@ -2129,14 +2129,18 @@ static int __bond_release_one(struct net_device *bond_dev,
> /* recompute stats just before removing the slave */
> bond_get_stats(bond->dev, &bond->bond_stats);
>
> - bond_upper_dev_unlink(bond, slave);
> /* unregister rx_handler early so bond_handle_frame wouldn't be called
> * for this slave anymore.
> */
> netdev_rx_handler_unregister(slave_dev);
>
> + /* Sync against bond_3ad_state_machine_handler() */
> + spin_lock_bh(&bond->mode_lock);
> + bond_upper_dev_unlink(bond, slave);
this calls netdev_upper_dev_unlink(), which calls call_netdevice_notifiers_info() for
NETDEV_PRECHANGEUPPER and NETDEV_CHANGEUPPER; both notifiers are allowed to sleep, so
you cannot hold the mode lock (a spinlock) across them
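
roughly, what the patch ends up doing here is:

        spin_lock_bh(&bond->mode_lock);       /* spinlock, BHs disabled */
        bond_upper_dev_unlink(bond, slave);
          -> netdev_upper_dev_unlink()
             -> call_netdevice_notifiers_info(NETDEV_PRECHANGEUPPER, ...)
             -> call_netdevice_notifiers_info(NETDEV_CHANGEUPPER, ...)
                /* notifier handlers may sleep -> sleeping while atomic */
        ...
        spin_unlock_bh(&bond->mode_lock);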
after netdev_rx_handler_unregister() the bond's recv_probe can no longer be executed
for this slave, so you don't really need to do the unlink under mode_lock, or to move
mode_lock at all
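i.e. one way to read that (untested sketch, just to illustrate the ordering, keeping
the existing mode_lock inside bond_3ad_unbind_slave() as today):

        /* bond_3ad_lacpdu_recv() can no longer run for this slave */
        netdev_rx_handler_unregister(slave_dev);

        /* notifiers may sleep, so keep this outside mode_lock */
        bond_upper_dev_unlink(bond, slave);

        if (BOND_MODE(bond) == BOND_MODE_8023AD)
                bond_3ad_unbind_slave(slave);   /* takes mode_lock internally */
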
> if (BOND_MODE(bond) == BOND_MODE_8023AD)
> bond_3ad_unbind_slave(slave);
> + spin_unlock_bh(&bond->mode_lock);
>
> if (bond_mode_can_use_xmit_hash(bond))
> bond_update_slave_arr(bond, slave);
>