Message-ID: <1bc4a73e-b22a-6bad-2583-3a0ffa979414@intel.com>
Date: Tue, 20 Apr 2021 10:42:09 +0300
From: Adrian Hunter <adrian.hunter@...el.com>
To: "Asutosh Das (asd)" <asutoshd@...eaurora.org>, cang@...eaurora.org,
martin.petersen@...cle.com, linux-scsi@...r.kernel.org
Cc: linux-arm-msm@...r.kernel.org,
Alim Akhtar <alim.akhtar@...sung.com>,
Avri Altman <avri.altman@....com>,
"James E.J. Bottomley" <jejb@...ux.ibm.com>,
Krzysztof Kozlowski <krzk@...nel.org>,
Stanley Chu <stanley.chu@...iatek.com>,
Andy Gross <agross@...nel.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...hat.com>,
Matthias Brugger <matthias.bgg@...il.com>,
Lee Jones <lee.jones@...aro.org>,
Bean Huo <beanhuo@...ron.com>,
Kiwoong Kim <kwmad.kim@...sung.com>,
Colin Ian King <colin.king@...onical.com>,
Wei Yongjun <weiyongjun1@...wei.com>,
Yue Hu <huyue2@...ong.com>,
Bart van Assche <bvanassche@....org>,
"Gustavo A. R. Silva" <gustavoars@...nel.org>,
Dinghao Liu <dinghao.liu@....edu.cn>,
Jaegeuk Kim <jaegeuk@...nel.org>,
Satya Tangirala <satyat@...gle.com>,
open list <linux-kernel@...r.kernel.org>,
"moderated list:ARM/SAMSUNG S3C, S5P AND EXYNOS ARM ARCHITECTURES"
<linux-arm-kernel@...ts.infradead.org>,
"open list:ARM/SAMSUNG S3C, S5P AND EXYNOS ARM ARCHITECTURES"
<linux-samsung-soc@...r.kernel.org>,
"moderated list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER..."
<linux-mediatek@...ts.infradead.org>
Subject: Re: [PATCH v20 1/2] scsi: ufs: Enable power management for wlun
On 20/04/21 7:15 am, Adrian Hunter wrote:
> On 20/04/21 12:53 am, Asutosh Das (asd) wrote:
>> On 4/19/2021 11:37 AM, Adrian Hunter wrote:
>>> On 16/04/21 10:49 pm, Asutosh Das wrote:
>>>>
>>>> Co-developed-by: Can Guo <cang@...eaurora.org>
>>>> Signed-off-by: Can Guo <cang@...eaurora.org>
>>>> Signed-off-by: Asutosh Das <asutoshd@...eaurora.org>
>>>> ---
>>>
>>> I came across 3 issues while testing. See comments below.
>>>
>> Hi Adrian
>> Thanks for the comments.
>>> <SNIP>
>>>
>>>> @@ -5794,7 +5839,7 @@ static void ufshcd_err_handling_unprepare(struct ufs_hba *hba)
>>>> if (ufshcd_is_clkscaling_supported(hba))
>>>> ufshcd_clk_scaling_suspend(hba, false);
>>>> ufshcd_clear_ua_wluns(hba);
>>>
>>> ufshcd_clear_ua_wluns() deadlocks trying to clear UFS_UPIU_RPMB_WLUN
>>> if sdev_rpmb is suspended and sdev_ufs_device is suspending.
>>> e.g. ufshcd_wl_suspend() is waiting on host_sem while ufshcd_err_handler()
>>> is running, at which point sdev_rpmb has already suspended.
>>>
>> Umm, I didn't understand this deadlock.
>> When you say sdev_rpmb is suspended, do you mean runtime suspended?
>> sdev_ufs_device can't be runtime-suspending while ufshcd_err_handling_unprepare is running.
>>
>> If you have a call-stack of this deadlock, please share it with me. I'll also try to reproduce this.
>
> Yes it is system suspend. sdev_rpmb has suspended, sdev_ufs_device is waiting on host_sem.
> ufshcd_err_handler() holds host_sem. ufshcd_clear_ua_wlun(UFS_UPIU_RPMB_WLUN) gets stuck.
> I will get some call-stacks.
Here are the call stacks:
[ 34.094321] Workqueue: ufs_eh_wq_0 ufshcd_err_handler
[ 34.094788] Call Trace:
[ 34.095281] __schedule+0x275/0x6c0
[ 34.095743] schedule+0x41/0xa0
[ 34.096240] blk_queue_enter+0x10d/0x230
[ 34.096693] ? wait_woken+0x70/0x70
[ 34.097167] blk_mq_alloc_request+0x53/0xc0
[ 34.097610] blk_get_request+0x1e/0x60
[ 34.098053] __scsi_execute+0x3c/0x260
[ 34.098529] ufshcd_clear_ua_wlun.cold+0xa6/0x14b
[ 34.098977] ufshcd_clear_ua_wluns.part.0+0x4d/0x92
[ 34.099456] ufshcd_err_handler+0x97a/0x9ff
[ 34.099902] process_one_work+0x1cc/0x360
[ 34.100384] worker_thread+0x45/0x3b0
[ 34.100851] ? process_one_work+0x360/0x360
[ 34.101308] kthread+0xf6/0x130
[ 34.101728] ? kthread_park+0x80/0x80
[ 34.102186] ret_from_fork+0x1f/0x30
[ 34.640751] task:kworker/u10:9 state:D stack:14528 pid: 255 ppid: 2 flags:0x00004000
[ 34.641253] Workqueue: events_unbound async_run_entry_fn
[ 34.641722] Call Trace:
[ 34.642217] __schedule+0x275/0x6c0
[ 34.642683] schedule+0x41/0xa0
[ 34.643179] schedule_timeout+0x18b/0x290
[ 34.643645] ? del_timer_sync+0x30/0x30
[ 34.644131] __down_timeout+0x6b/0xc0
[ 34.644568] ? ufshcd_clkscale_enable_show+0x20/0x20
[ 34.645014] ? async_schedule_node_domain+0x17d/0x190
[ 34.645496] down_timeout+0x42/0x50
[ 34.645947] ufshcd_wl_suspend+0x79/0xa0
[ 34.646432] ? scmd_printk+0x100/0x100
[ 34.646917] scsi_bus_suspend_common+0x56/0xc0
[ 34.647405] ? scsi_bus_freeze+0x10/0x10
[ 34.647858] dpm_run_callback+0x45/0x110
[ 34.648347] __device_suspend+0x117/0x460
[ 34.648788] async_suspend+0x16/0x90
[ 34.649251] async_run_entry_fn+0x26/0x110
[ 34.649676] process_one_work+0x1cc/0x360
[ 34.650137] worker_thread+0x45/0x3b0
[ 34.650563] ? process_one_work+0x360/0x360
[ 34.650994] kthread+0xf6/0x130
[ 34.651455] ? kthread_park+0x80/0x80
[ 34.651882] ret_from_fork+0x1f/0x30
>
>>
>> I'll address the other comments in the next version.
>>
>>
>> Thank you!
>>
>>>> - pm_runtime_put(hba->dev);
>>>> + ufshcd_rpm_put(hba);
>>>> }
>>>
>>> <SNIP>
>>>
>>>> +void ufshcd_resume_complete(struct device *dev)
>>>> +{
>>
>