Message-ID: <3bfa692ce706c5c198f565e674afb56f@codeaurora.org>
Date: Fri, 01 May 2020 13:12:17 +0800
From: Can Guo <cang@...eaurora.org>
To: Bart Van Assche <bvanassche@....org>
Cc: asutoshd@...eaurora.org, nguyenb@...eaurora.org,
hongwus@...eaurora.org, rnayak@...eaurora.org,
stanley.chu@...iatek.com, alim.akhtar@...sung.com,
beanhuo@...ron.com, Avri.Altman@....com,
bjorn.andersson@...aro.org, linux-scsi@...r.kernel.org,
kernel-team@...roid.com, saravanak@...gle.com, salyzyn@...gle.com,
"James E.J. Bottomley" <jejb@...ux.ibm.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 1/1] scsi: pm: Balance pm_only counter of request queue
during system resume
On 2020-05-01 09:50, Bart Van Assche wrote:
> On 2020-04-30 18:42, Can Guo wrote:
>> On 2020-05-01 04:32, Bart Van Assche wrote:
>> > Has it been considered to test directly whether a SCSI device has been
>> > runtime suspended instead of relying on blk_queue_pm_only()? How about
>> > using pm_runtime_status_suspended() or adding a function in
>> > block/blk-pm.h that checks whether q->rpm_status == RPM_SUSPENDED?
>>
>> Yes, I made an earlier version of the patch that way, and it also worked
>> well, as both ways are effectively equivalent. I kind of prefer the
>> current code because we can be confident that, after scsi_dev_type_resume()
>> returns, pm_only must be 0. Different reviewers may have different
>> opinions; either way works.
>
> Hi Can,
>
> Please note that this is not a matter of a reviewer's personal preference
> but a matter of correctness. blk_queue_pm_only() returns a value > 0 not
> only when a SCSI device has been runtime suspended, but also when
> scsi_device_quiesce() has been called for another reason. Hence my request
> to test the "runtime suspended" status directly and not to rely on
> blk_queue_pm_only().
>
> Thanks,
>
> Bart.

Hi Bart,

I agree we are pursuing correctness here, but as I said, I think both
ways are equally correct. I also agree with you that the alternative way,
see [2], is much easier to understand, so we can take the alternative way
if you are OK with it.

[1] Currently, scsi_dev_type_resume() is the hook for resume, thaw and
restore. To my understanding, while scsi_dev_type_resume() is running,
scsi_device_quiesce() cannot be called on this sdev, at least not in the
current code base, so it is OK to rely on blk_queue_pm_only() in
scsi_dev_type_resume().
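
If we went with a helper in block/blk-pm.h instead, as you also suggested,
it could be as simple as the sketch below (the name is made up here and such
a helper does not exist in mainline; it assumes CONFIG_PM, since rpm_status
only exists in that configuration):

/* Sketch only: hypothetical helper, not in mainline. */
#ifdef CONFIG_PM
static inline bool blk_queue_runtime_suspended(struct request_queue *q)
{
        return q->rpm_status == RPM_SUSPENDED;
}
#endif

Then scsi_dev_type_resume() could call it on sdev->request_queue instead of
checking blk_queue_pm_only().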

[2] The alternative way, which I have tested, is shown below. I think it is
what you asked for, if my understanding is right; please correct me if I am
wrong.

diff --git a/drivers/scsi/scsi_pm.c b/drivers/scsi/scsi_pm.c
index 3717eea..d18271d 100644
--- a/drivers/scsi/scsi_pm.c
+++ b/drivers/scsi/scsi_pm.c
@@ -74,12 +74,15 @@ static int scsi_dev_type_resume(struct device *dev,
 {
 	const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
 	int err = 0;
+	bool was_rpm_suspended = false;
 
 	err = cb(dev, pm);
 	scsi_device_resume(to_scsi_device(dev));
 	dev_dbg(dev, "scsi resume: %d\n", err);
 
 	if (err == 0) {
+		was_rpm_suspended = pm_runtime_suspended(dev);
+
 		pm_runtime_disable(dev);
 		err = pm_runtime_set_active(dev);
 		pm_runtime_enable(dev);
@@ -93,8 +96,10 @@ static int scsi_dev_type_resume(struct device *dev,
 		 */
 		if (!err && scsi_is_sdev_device(dev)) {
 			struct scsi_device *sdev = to_scsi_device(dev);
-
-			blk_set_runtime_active(sdev->request_queue);
+			if (was_rpm_suspended)
+				blk_post_runtime_resume(sdev->request_queue, 0);
+			else
+				blk_set_runtime_active(sdev->request_queue);
 		}
 	}
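
The reason for the branch, as I read the current block/blk-pm.c (simplified
paraphrase below, not the exact code), is that blk_post_runtime_resume()
drops the pm_only reference which blk_pre_runtime_suspend() took, whereas
blk_set_runtime_active() only updates rpm_status and leaves pm_only
untouched:

/* Simplified paraphrase of block/blk-pm.c; locking and autosuspend
 * handling are omitted. */
int blk_pre_runtime_suspend(struct request_queue *q)
{
        blk_set_pm_only(q);             /* pm_only++ */
        /* ... wait for outstanding requests ... */
        return 0;
}

void blk_post_runtime_resume(struct request_queue *q, int err)
{
        if (!err) {
                q->rpm_status = RPM_ACTIVE;
                blk_clear_pm_only(q);   /* pm_only--, balances the suspend */
        } else {
                q->rpm_status = RPM_SUSPENDED;
        }
}

void blk_set_runtime_active(struct request_queue *q)
{
        q->rpm_status = RPM_ACTIVE;     /* pm_only is not touched here */
}

So, for a device which was runtime suspended before the system suspend, only
the blk_post_runtime_resume() path brings pm_only back to 0, which is what
this patch is trying to guarantee.
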
Thanks,
Can Guo