Message-ID: <X/awxP3m1VG3b+bX@google.com>
Date: Wed, 6 Jan 2021 22:57:08 -0800
From: Jaegeuk Kim <jaegeuk@...nel.org>
To: Can Guo <cang@...eaurora.org>
Cc: linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org,
kernel-team@...roid.com, alim.akhtar@...sung.com,
avri.altman@....com, bvanassche@....org,
martin.petersen@...cle.com, stanley.chu@...iatek.com
Subject: Re: [PATCH v3 1/2] scsi: ufs: fix livelock of ufshcd_clear_ua_wluns
On 01/07, Can Guo wrote:
> On 2021-01-07 05:41, Jaegeuk Kim wrote:
> > When gate_work/ungate_work gets an error during hibern8_enter or exit,
> > ufshcd_err_handler()
> > ufshcd_scsi_block_requests()
> > ufshcd_reset_and_restore()
> > ufshcd_clear_ua_wluns() -> stuck
> > ufshcd_scsi_unblock_requests()
> >
> > In order to avoid it, call ufshcd_clear_ua_wluns() in each recovery flow,
> > such as suspend/resume, link_recovery, and error_handler.
> >
> > Fixes: 1918651f2d7e ("scsi: ufs: Clear UAC for RPMB after ufshcd resets")
> > Signed-off-by: Jaegeuk Kim <jaegeuk@...nel.org>
> > ---
> > drivers/scsi/ufs/ufshcd.c | 15 ++++++++++-----
> > 1 file changed, 10 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
> > index bedb822a40a3..1678cec08b51 100644
> > --- a/drivers/scsi/ufs/ufshcd.c
> > +++ b/drivers/scsi/ufs/ufshcd.c
> > @@ -3996,6 +3996,8 @@ int ufshcd_link_recovery(struct ufs_hba *hba)
> > if (ret)
> > dev_err(hba->dev, "%s: link recovery failed, err %d",
> > __func__, ret);
> > + else
> > + ufshcd_clear_ua_wluns(hba);
>
> Can we put it right after ufshcd_scsi_add_wlus() in ufshcd_add_lus()?
May I ask the reason? We'll call it after ufshcd_add_lus() later, though.
>
> Thanks,
> Can Guo.
>
> >
> > return ret;
> > }
> > @@ -6003,6 +6005,9 @@ static void ufshcd_err_handler(struct work_struct
> > *work)
> > ufshcd_scsi_unblock_requests(hba);
> > ufshcd_err_handling_unprepare(hba);
> > up(&hba->eh_sem);
> > +
> > + if (!err && needs_reset)
> > + ufshcd_clear_ua_wluns(hba);
> > }
> >
> > /**
> > @@ -6940,14 +6945,11 @@ static int
> > ufshcd_host_reset_and_restore(struct ufs_hba *hba)
> > ufshcd_set_clk_freq(hba, true);
> >
> > err = ufshcd_hba_enable(hba);
> > - if (err)
> > - goto out;
> >
> > /* Establish the link again and restore the device */
> > - err = ufshcd_probe_hba(hba, false);
> > if (!err)
> > - ufshcd_clear_ua_wluns(hba);
> > -out:
> > + err = ufshcd_probe_hba(hba, false);
> > +
> > if (err)
> > dev_err(hba->dev, "%s: Host init failed %d\n", __func__, err);
> > ufshcd_update_evt_hist(hba, UFS_EVT_HOST_RESET, (u32)err);
> > @@ -8777,6 +8779,7 @@ static int ufshcd_suspend(struct ufs_hba *hba,
> > enum ufs_pm_op pm_op)
> > ufshcd_resume_clkscaling(hba);
> > hba->clk_gating.is_suspended = false;
> > hba->dev_info.b_rpm_dev_flush_capable = false;
> > + ufshcd_clear_ua_wluns(hba);
> > ufshcd_release(hba);
> > out:
> > if (hba->dev_info.b_rpm_dev_flush_capable) {
> > @@ -8887,6 +8890,8 @@ static int ufshcd_resume(struct ufs_hba *hba,
> > enum ufs_pm_op pm_op)
> > cancel_delayed_work(&hba->rpm_dev_flush_recheck_work);
> > }
> >
> > + ufshcd_clear_ua_wluns(hba);
> > +
> > /* Schedule clock gating in case of no access to UFS device yet */
> > ufshcd_release(hba);