Message-ID: <20220115102947.GB13341@linux.alibaba.com>
Date: Sat, 15 Jan 2022 18:29:47 +0800
From: "dust.li" <dust.li@...ux.alibaba.com>
To: Wen Gu <guwen@...ux.alibaba.com>, kgraul@...ux.ibm.com,
davem@...emloft.net, kuba@...nel.org
Cc: linux-s390@...r.kernel.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH net] net/smc: Fix hung_task when removing SMC-R devices
On Fri, Jan 14, 2022 at 09:37:24PM +0800, Wen Gu wrote:
>A hung_task is observed when removing SMC-R devices. Suppose that
>a link group has two active links (lnk_A, lnk_B) associated with two
>different SMC-R devices (dev_A, dev_B). When dev_A is removed, the
>link group is removed from smc_lgr_list and added to
>lgr_linkdown_list. lnk_A is cleared and smcibdev(A)->lnk_cnt
>reaches zero. However, when dev_B is removed afterwards, the link
>group can't be found in smc_lgr_list, so lnk_B is never cleared and
>smcibdev(B)->lnk_cnt never reaches zero, which causes a hung_task.
>
>This patch fixes the issue by restoring the implementation of
>smc_smcr_terminate_all() to what it was before commit 349d43127dac
>("net/smc: fix kernel panic caused by race of smc_sock"). The original
>implementation also satisfies the intention of making sure that QPs
>are destroyed before CQs, because we always wait for
>smcibdev->lnk_cnt to reach zero, which guarantees that the QPs have
>been destroyed.
Good catch, thank you!
Could you update the comments of smc_smcr_terminate_all() as well?
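
For readers who want to see the failure mode in isolation: below is a
minimal userspace analogue of the lnk_cnt accounting (pthreads instead
of the kernel's wait_event()/wake_up(); the function names are only
borrowed from smc_core.c, this is not kernel code). If one of the two
link_clear() calls never happens -- the lnk_B case described above --
the waiter blocks forever, which is the reported hung_task.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t lnks_deleted = PTHREAD_COND_INITIALIZER;
static int lnk_cnt = 2;	/* two active links on this ibdev */

/* Role of smcr_link_clear(): drop the count, wake the waiter. */
static void link_clear(void)
{
	pthread_mutex_lock(&lock);
	if (--lnk_cnt == 0)
		pthread_cond_broadcast(&lnks_deleted);
	pthread_mutex_unlock(&lock);
}

/* Role of the tail of smc_smcr_terminate_all():
 * wait_event(smcibdev->lnks_deleted, !atomic_read(&smcibdev->lnk_cnt))
 */
static void wait_lnks_deleted(void)
{
	pthread_mutex_lock(&lock);
	while (lnk_cnt)
		pthread_cond_wait(&lnks_deleted, &lock);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	link_clear();		/* lnk_A cleared when dev_A goes away */
	link_clear();		/* lnk_B: skipped in the buggy path, so
				 * wait_lnks_deleted() would block forever */
	wait_lnks_deleted();
	puts("lnk_cnt reached zero, device removal can finish");
	return 0;
}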
>
>Fixes: 349d43127dac ("net/smc: fix kernel panic caused by race of smc_sock")
>Signed-off-by: Wen Gu <guwen@...ux.alibaba.com>
Reviewed-by: Dust Li <dust.li@...ux.alibaba.com>
>---
> net/smc/smc_core.c | 13 +------------
> 1 file changed, 1 insertion(+), 12 deletions(-)
>
>diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
>index b19c0aa..1124594 100644
>--- a/net/smc/smc_core.c
>+++ b/net/smc/smc_core.c
>@@ -1533,7 +1533,6 @@ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
> {
> struct smc_link_group *lgr, *lg;
> LIST_HEAD(lgr_free_list);
>- LIST_HEAD(lgr_linkdown_list);
> int i;
>
> spin_lock_bh(&smc_lgr_list.lock);
>@@ -1545,7 +1544,7 @@ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
> list_for_each_entry_safe(lgr, lg, &smc_lgr_list.list, list) {
> for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
> if (lgr->lnk[i].smcibdev == smcibdev)
>- list_move_tail(&lgr->list, &lgr_linkdown_list);
>+ smcr_link_down_cond_sched(&lgr->lnk[i]);
> }
> }
> }
>@@ -1557,16 +1556,6 @@ void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
> __smc_lgr_terminate(lgr, false);
> }
>
>- list_for_each_entry_safe(lgr, lg, &lgr_linkdown_list, list) {
>- for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
>- if (lgr->lnk[i].smcibdev == smcibdev) {
>- mutex_lock(&lgr->llc_conf_mutex);
>- smcr_link_down_cond(&lgr->lnk[i]);
>- mutex_unlock(&lgr->llc_conf_mutex);
>- }
>- }
>- }
>-
> if (smcibdev) {
> if (atomic_read(&smcibdev->lnk_cnt))
> wait_event(smcibdev->lnks_deleted,
>--
>1.8.3.1
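
One more note on the ordering argument in the commit message, since it
is easy to miss: smcr_link_clear() tears down the link's QP before it
drops lnk_cnt, so lnk_cnt == 0 implies every QP is gone, and the CQs
are only cleaned up after smc_smcr_terminate_all() returns. A
compressed userspace sketch of that invariant (destroy_qp() and
destroy_cq() here are stand-ins, not the real ib_* verbs):

#include <assert.h>
#include <stdio.h>

static int lnk_cnt = 2;		/* links still holding a QP */

static void destroy_qp(void)	/* stand-in for the QP teardown done
				 * inside smcr_link_clear() */
{
	printf("QP destroyed\n");
	lnk_cnt--;		/* count drops only after the QP is gone */
}

static void destroy_cq(void)	/* stand-in for the CQ teardown after
				 * smc_smcr_terminate_all() returns */
{
	assert(lnk_cnt == 0);	/* the wait_event() guarantees this */
	printf("CQ destroyed\n");
}

int main(void)
{
	while (lnk_cnt)
		destroy_qp();
	destroy_cq();		/* can never run ahead of a live QP */
	return 0;
}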