Message-ID: <20200918041234.GA3300389@google.com>
Date: Thu, 17 Sep 2020 21:12:34 -0700
From: Jaegeuk Kim <jaegeuk@...nel.org>
To: linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org,
kernel-team@...roid.com
Cc: Alim Akhtar <alim.akhtar@...sung.com>,
Avri Altman <avri.altman@....com>
Subject: Re: [PATCH 4/6] scsi: ufs: fix LINERESET on hibern8
Please ignore this patch.
Thanks.
On 09/15, Jaegeuk Kim wrote:
> From: Jaegeuk Kim <jaegeuk@...gle.com>
>
> While running an endless test that reads the sysfs entries of UFS, I got a UFS
> timeout with the following kernel messages.
>
> query: dev_cmd_send: seq_no=78082 tag=31, idn=2
> query: ufshcd_wait_for_dev_cmd: dev_cmd request timedout, tag 31
> query: __ufshcd_query_descriptor: opcode 0x01 for idn 2 failed, index 0, err = -11
> -- hibern8: dme: dme_send: cmd_id=0x17 idn=0
> query: ufshcd_query_descriptor: failed with error -11, retries 3
> -- hibern8: ufshcd_update_uic_error: LINERESET during hibern8 enter
> -- hibern8: __ufshcd_uic_hibern8_enter: hibern8 enter failed. ret = -110
>
> The problem is caused by the hibern8 command issued by ufshcd_suspend(), which
> is not aware of an in-flight query command. If auto-hibern8 is enabled, we
> don't actually need to issue the hibern8 command from suspend.
>
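
For anyone skimming the diff: the change boils down to a reader/writer scheme on
a new hba->query_lock rw_semaphore, where the query path takes the lock for read
and runtime suspend takes it for write (or backs off). Below is a minimal sketch
of that idea only; the two helpers are hypothetical stand-ins for
ufshcd_is_link_hibern8() and __ufshcd_query_descriptor(), and the real code
operates on the hba fields shown in the patch.

#include <linux/errno.h>
#include <linux/rwsem.h>
#include <linux/types.h>

static DECLARE_RWSEM(query_lock);	/* stands in for hba->query_lock */

/* Hypothetical stand-ins for ufshcd_is_link_hibern8() and
 * __ufshcd_query_descriptor() in the real driver. */
static bool link_is_hibern8(void);
static int do_query_descriptor(void);

/* Query path: concurrent queries may run, but never while runtime
 * suspend holds the lock and is about to put the link into hibern8. */
static int query_descriptor_sketch(void)
{
	int err = -EAGAIN;

	down_read(&query_lock);
	if (!link_is_hibern8())
		err = do_query_descriptor();
	up_read(&query_lock);

	return err;
}

/* Runtime-suspend path with auto-hibern8 supported: if a query holds
 * the read lock, back off with -EBUSY instead of racing it into
 * hibern8. */
static int runtime_suspend_sketch(void)
{
	if (!down_write_trylock(&query_lock))
		return -EBUSY;
	/* ... safe to enter hibern8 / power down here ... */
	up_write(&query_lock);
	return 0;
}
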
> Cc: Alim Akhtar <alim.akhtar@...sung.com>
> Cc: Avri Altman <avri.altman@....com>
> Signed-off-by: Jaegeuk Kim <jaegeuk@...gle.com>
> ---
> drivers/scsi/ufs/ufshcd.c | 20 ++++++++++++++++++--
> drivers/scsi/ufs/ufshcd.h | 1 +
> 2 files changed, 19 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
> index 848e33ec40639..bdc82cc3824aa 100644
> --- a/drivers/scsi/ufs/ufshcd.c
> +++ b/drivers/scsi/ufs/ufshcd.c
> @@ -3079,8 +3079,12 @@ int ufshcd_query_descriptor_retry(struct ufs_hba *hba,
> int retries;
>
> for (retries = QUERY_REQ_RETRIES; retries > 0; retries--) {
> - err = __ufshcd_query_descriptor(hba, opcode, idn, index,
> + err = -EAGAIN;
> + down_read(&hba->query_lock);
> + if (!ufshcd_is_link_hibern8(hba))
> + err = __ufshcd_query_descriptor(hba, opcode, idn, index,
> selector, desc_buf, buf_len);
> + up_read(&hba->query_lock);
> if (!err || err == -EINVAL)
> break;
> }
> @@ -8263,8 +8267,8 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
> enum ufs_pm_level pm_lvl;
> enum ufs_dev_pwr_mode req_dev_pwr_mode;
> enum uic_link_state req_link_state;
> + bool need_upwrite = false;
>
> - hba->pm_op_in_progress = 1;
> if (!ufshcd_is_shutdown_pm(pm_op)) {
> pm_lvl = ufshcd_is_runtime_pm(pm_op) ?
> hba->rpm_lvl : hba->spm_lvl;
> @@ -8275,6 +8279,15 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
> req_link_state = UIC_LINK_OFF_STATE;
> }
>
> + if (ufshcd_is_runtime_pm(pm_op) &&
> + req_link_state == UIC_LINK_HIBERN8_STATE &&
> + hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT) {
> + need_upwrite = true;
> + if (!down_write_trylock(&hba->query_lock))
> + return -EBUSY;
> + }
> + hba->pm_op_in_progress = 1;
> +
> /*
> * If we can't transition into any of the low power modes
> * just gate the clocks.
> @@ -8403,6 +8416,8 @@ static int ufshcd_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
> }
>
> hba->pm_op_in_progress = 0;
> + if (need_upwrite)
> + up_write(&hba->query_lock);
>
> if (ret)
> ufshcd_update_reg_hist(&hba->ufs_stats.suspend_err, (u32)ret);
> @@ -8894,6 +8909,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
> mutex_init(&hba->dev_cmd.lock);
>
> init_rwsem(&hba->clk_scaling_lock);
> + init_rwsem(&hba->query_lock);
>
> ufshcd_init_clk_gating(hba);
>
> diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
> index 363589c0bd370..6f8e05eaf9661 100644
> --- a/drivers/scsi/ufs/ufshcd.h
> +++ b/drivers/scsi/ufs/ufshcd.h
> @@ -754,6 +754,7 @@ struct ufs_hba {
> bool is_urgent_bkops_lvl_checked;
>
> struct rw_semaphore clk_scaling_lock;
> + struct rw_semaphore query_lock;
> unsigned char desc_size[QUERY_DESC_IDN_MAX];
> atomic_t scsi_block_reqs_cnt;
>
> --
> 2.28.0.618.gf4bc123cb7-goog