Message-Id: <20210510102020.090699946@linuxfoundation.org>
Date: Mon, 10 May 2021 12:19:07 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Dick Kennedy <dick.kennedy@...adcom.com>,
James Smart <jsmart2021@...il.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.12 158/384] scsi: lpfc: Fix ADISC handling that never frees nodes
From: James Smart <jsmart2021@...il.com>
[ Upstream commit 309b477462df7542355ac984674a6e89c01c89aa ]
While running a target port swap test with ADISC enabled, several nodes
remain in UNUSED state. These nodes are never freed, and rmmod hangs for
a long time before finishing with a "0233 Nodelist not empty" error.
During PLOGI completion, lpfc_plogi_confirm_nport() looks for an existing
node with the same WWPN. If one is found, that existing node is used to
continue discovery and the node on which the PLOGI was performed is freed.
When ADISC is enabled, an ADISC ELS request is triggered in response to an
RSCN. It's possible that the ADISC is rejected by the remote port, causing
the ADISC completion handler to clear the port and node name in the node.
If this occurs and a PLOGI is subsequently received, the node lookup by
WWPN now fails, so the port swap logic kicks in, allocates a new node, and
swaps to it. This effectively orphans the original node structure.
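
For reference, the pre-fix lookup-failure path (the hunk removed in the
diff below) looked roughly like the condensed sketch here; the
lpfc_nlp_init() allocation is what strands the original node:

	if (!new_ndlp) {
		/* WWPN already matches ndlp: nothing to confirm. */
		if (!memcmp(&ndlp->nlp_portname, name,
			    sizeof(struct lpfc_name)))
			return ndlp;

		/* Names were cleared by the rejected ADISC, so the
		 * lookup failed; allocating a fresh node here means
		 * the swap proceeds against it and ndlp is orphaned.
		 */
		new_ndlp = lpfc_nlp_init(vport, ndlp->nlp_DID);
		if (!new_ndlp)
			return ndlp;
	}

(The real code also frees active_rrqs_xri_bitmap on the early returns;
that is omitted here for brevity.)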
Fix the situation by detecting when the lookup fails and forgoing the node
swap and node allocation, continuing instead with the node on which the
PLOGI was issued.
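
With the fix, the routine instead bails out early when the lookup comes
back empty; a condensed sketch of the resulting flow, using only names
that appear in the diff below:

	new_ndlp = lpfc_findnode_wwpn(vport, &sp->portName);

	/* Lookup failed (e.g. names cleared by a rejected ADISC) or the
	 * match is the PLOGI node itself: keep ndlp, no swap and no
	 * allocation.
	 */
	if (!new_ndlp || (new_ndlp == ndlp))
		return ndlp;

	/* Otherwise new_ndlp is a distinct pre-existing node: proceed
	 * with the swap, preserving its DID and, on SLI-4, its RRQ XRI
	 * bitmap.
	 */
	keepDID = new_ndlp->nlp_DID;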
Link: https://lore.kernel.org/r/20210301171821.3427-15-jsmart2021@gmail.com
Co-developed-by: Dick Kennedy <dick.kennedy@...adcom.com>
Signed-off-by: Dick Kennedy <dick.kennedy@...adcom.com>
Signed-off-by: James Smart <jsmart2021@...il.com>
Signed-off-by: Martin K. Petersen <martin.petersen@...cle.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
drivers/scsi/lpfc/lpfc_els.c | 33 +++++++--------------------------
1 file changed, 7 insertions(+), 26 deletions(-)
diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
index beb2fcd2d8e7..04c002eea446 100644
--- a/drivers/scsi/lpfc/lpfc_els.c
+++ b/drivers/scsi/lpfc/lpfc_els.c
@@ -1600,7 +1600,7 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp,
 	struct lpfc_nodelist *new_ndlp;
 	struct serv_parm *sp;
 	uint8_t name[sizeof(struct lpfc_name)];
-	uint32_t rc, keepDID = 0, keep_nlp_flag = 0;
+	uint32_t keepDID = 0, keep_nlp_flag = 0;
 	uint32_t keep_new_nlp_flag = 0;
 	uint16_t keep_nlp_state;
 	u32 keep_nlp_fc4_type = 0;
@@ -1622,7 +1622,7 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp,
 	new_ndlp = lpfc_findnode_wwpn(vport, &sp->portName);
 
 	/* return immediately if the WWPN matches ndlp */
-	if (new_ndlp == ndlp)
+	if (!new_ndlp || (new_ndlp == ndlp))
 		return ndlp;
 
 	if (phba->sli_rev == LPFC_SLI_REV4) {
@@ -1641,30 +1641,11 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp,
 			 (new_ndlp ? new_ndlp->nlp_flag : 0),
 			 (new_ndlp ? new_ndlp->nlp_fc4_type : 0));
 
-	if (!new_ndlp) {
-		rc = memcmp(&ndlp->nlp_portname, name,
-			    sizeof(struct lpfc_name));
-		if (!rc) {
-			if (active_rrqs_xri_bitmap)
-				mempool_free(active_rrqs_xri_bitmap,
-					     phba->active_rrq_pool);
-			return ndlp;
-		}
-		new_ndlp = lpfc_nlp_init(vport, ndlp->nlp_DID);
-		if (!new_ndlp) {
-			if (active_rrqs_xri_bitmap)
-				mempool_free(active_rrqs_xri_bitmap,
-					     phba->active_rrq_pool);
-			return ndlp;
-		}
-	} else {
-		keepDID = new_ndlp->nlp_DID;
-		if (phba->sli_rev == LPFC_SLI_REV4 &&
-		    active_rrqs_xri_bitmap)
-			memcpy(active_rrqs_xri_bitmap,
-			       new_ndlp->active_rrqs_xri_bitmap,
-			       phba->cfg_rrq_xri_bitmap_sz);
-	}
+	keepDID = new_ndlp->nlp_DID;
+
+	if (phba->sli_rev == LPFC_SLI_REV4 && active_rrqs_xri_bitmap)
+		memcpy(active_rrqs_xri_bitmap, new_ndlp->active_rrqs_xri_bitmap,
+		       phba->cfg_rrq_xri_bitmap_sz);
 
 	/* At this point in this routine, we know new_ndlp will be
 	 * returned. however, any previous GID_FTs that were done
--
2.30.2