Message-ID: <BL0PR04MB6564B4E309188E320AB258E6FC479@BL0PR04MB6564.namprd04.prod.outlook.com>
Date: Wed, 21 Apr 2021 14:13:31 +0000
From: Avri Altman <Avri.Altman@....com>
To: "daejun7.park@...sung.com" <daejun7.park@...sung.com>,
Greg KH <gregkh@...uxfoundation.org>,
"jejb@...ux.ibm.com" <jejb@...ux.ibm.com>,
"martin.petersen@...cle.com" <martin.petersen@...cle.com>,
"asutoshd@...eaurora.org" <asutoshd@...eaurora.org>,
"stanley.chu@...iatek.com" <stanley.chu@...iatek.com>,
"cang@...eaurora.org" <cang@...eaurora.org>,
"bvanassche@....org" <bvanassche@....org>,
"huobean@...il.com" <huobean@...il.com>,
ALIM AKHTAR <alim.akhtar@...sung.com>
CC: "linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
JinHwan Park <jh.i.park@...sung.com>,
Javier Gonzalez <javier.gonz@...sung.com>,
Sung-Jun Park <sungjun07.park@...sung.com>,
Jinyoung CHOI <j-young.choi@...sung.com>,
Dukhyun Kwon <d_hyun.kwon@...sung.com>,
Keoseong Park <keosung.park@...sung.com>,
Jaemyung Lee <jaemyung.lee@...sung.com>,
Jieon Seol <jieon.seol@...sung.com>
Subject: RE: [PATCH v32 4/4] scsi: ufs: Add HPB 2.0 support
> @@ -1653,6 +2148,7 @@ void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev)
>
> ufshpb_cancel_jobs(hpb);
>
> + ufshpb_pre_req_mempool_destroy(hpb);
> ufshpb_destroy_region_tbl(hpb);
>
> kmem_cache_destroy(hpb->map_req_cache);
> @@ -1692,6 +2188,7 @@ static void ufshpb_hpb_lu_prepared(struct ufs_hba *hba)
> ufshpb_set_state(hpb, HPB_PRESENT);
> if ((hpb->lu_pinned_end - hpb->lu_pinned_start) > 0)
> queue_work(ufshpb_wq, &hpb->map_work);
> + ufshpb_issue_umap_all_req(hpb);
> } else {
> dev_err(hba->dev, "destroy HPB lu %d\n", hpb->lun);
> ufshpb_destroy_lu(hba, sdev);
Here in ufshpb_hpb_lu_prepared(), ufshpb_remove() can end up being called without ufshpb_destroy_lu() ever having run, and while there are still jobs running.
How about calling ufshpb_destroy_lu() as part of ufshpb_remove()?
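Something along these lines, only as a sketch of the direction (I am assuming ufshpb_remove() takes the hba and that walking the LUs with shost_for_each_device() is acceptable; the workqueue line is just a stand-in for whatever teardown ufshpb_remove() already does):

void ufshpb_remove(struct ufs_hba *hba)
{
	struct scsi_device *sdev;

	/*
	 * Tear down every HPB LU before the rest of the removal.
	 * ufshpb_destroy_lu() already cancels the running jobs, and it
	 * bails out once hostdata is NULL, so the later call from
	 * __scsi_remove_device() becomes a harmless no-op.
	 */
	shost_for_each_device(sdev, hba->host)
		ufshpb_destroy_lu(hba, sdev);

	/* existing teardown */
	destroy_workqueue(ufshpb_wq);
}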
If it then gets called again from __scsi_remove_device(), hostdata is already NULL, so it won't matter.
Again, only once we know where all of this is going.
Thanks,
Avri