Message-ID: <CAF1ivSYVznOU43XT9_5d-EMNp1TQwBiYfG=PL6J1KPdmQx9pgw@mail.gmail.com>
Date:	Tue, 22 May 2012 14:12:58 +0800
From:	Lin Ming <ming.m.lin@...el.com>
To:	Alan Stern <stern@...land.harvard.edu>
Cc:	Jens Axboe <axboe@...nel.dk>, Jeff Moyer <jmoyer@...hat.com>,
	linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
	linux-scsi@...r.kernel.org
Subject: Re: [RFC v2 PATCH 2/4] block: add queue runtime pm callbacks

On Fri, May 18, 2012 at 2:29 AM, Alan Stern <stern@...land.harvard.edu> wrote:
> On Thu, 17 May 2012, Lin Ming wrote:
>
>> Add runtime pm suspend/resume callbacks to request queue.
>> As an example, implement these callbacks in sd driver.
>
> This is not the way to do it.  The block subsystem should not use
> suspend/resume callbacks.
>
> Instead, there should be block functions that can be called by client
> drivers: block_pre_runtime_suspend, block_post_runtime_suspend,
> block_pre_runtime_resume, and block_post_runtime_resume.
>
> They should do something like this:
>
>        block_pre_runtime_suspend:
>                If any requests are in the queue, return -EBUSY.
>                Otherwise set q->rpm_status to RPM_SUSPENDING and
>                return 0.
>
>        block_post_runtime_suspend:
>                If the suspend succeeded then set q->rpm_status to
>                RPM_SUSPENDED.  Otherwise set it to RPM_ACTIVE and
>                call pm_runtime_mark_last_busy().
>
>        block_pre_runtime_resume:
>                Set q->rpm_status to RPM_RESUMING.
>
>        block_post_runtime_resume:
>                If the resume succeeded then set q->rpm_status to
>                RPM_ACTIVE and call pm_runtime_mark_last_busy() and
>                pm_request_autosuspend().
>                Otherwise set q->rpm_status to RPM_SUSPENDED.
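
A rough sketch, not from the patch itself, of what those four helpers
might look like, assuming the queue gains q->dev, q->rpm_status and an
illustrative q->nr_pending count of outstanding requests, all protected
by q->queue_lock:

#include <linux/blkdev.h>
#include <linux/pm_runtime.h>

int block_pre_runtime_suspend(struct request_queue *q)
{
	int ret = 0;

	spin_lock_irq(q->queue_lock);
	if (q->nr_pending)
		ret = -EBUSY;			/* requests still queued */
	else
		q->rpm_status = RPM_SUSPENDING;
	spin_unlock_irq(q->queue_lock);
	return ret;
}

void block_post_runtime_suspend(struct request_queue *q, int err)
{
	spin_lock_irq(q->queue_lock);
	if (!err) {
		q->rpm_status = RPM_SUSPENDED;
	} else {
		q->rpm_status = RPM_ACTIVE;
		pm_runtime_mark_last_busy(q->dev);
	}
	spin_unlock_irq(q->queue_lock);
}

void block_pre_runtime_resume(struct request_queue *q)
{
	spin_lock_irq(q->queue_lock);
	q->rpm_status = RPM_RESUMING;
	spin_unlock_irq(q->queue_lock);
}

void block_post_runtime_resume(struct request_queue *q, int err)
{
	spin_lock_irq(q->queue_lock);
	if (!err) {
		q->rpm_status = RPM_ACTIVE;
		pm_runtime_mark_last_busy(q->dev);
		pm_request_autosuspend(q->dev);
	} else {
		q->rpm_status = RPM_SUSPENDED;
	}
	spin_unlock_irq(q->queue_lock);
}
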
>
> There should also be an initialization function for client drivers to
> call.  block_runtime_pm_init() should call pm_runtime_mark_last_busy(),
> pm_runtime_use_autosuspend(), and pm_runtime_autosuspend().
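
The init helper could be as small as the sketch below; the q->dev member
(mentioned further down) and the RPM_ACTIVE initialization are assumptions,
not part of the patch:

void block_runtime_pm_init(struct request_queue *q, struct device *dev)
{
	q->dev = dev;			/* device to pass to pm_runtime_* calls */
	q->rpm_status = RPM_ACTIVE;	/* assumption: start out active */
	pm_runtime_mark_last_busy(dev);
	pm_runtime_use_autosuspend(dev);
	pm_runtime_autosuspend(dev);
}
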
>
> Next, you have to modify the parts of the block layer that run when a
> new request is added to the queue or a request is removed.
>
>        When a request is added:
>                If q->rpm_status is RPM_SUSPENDED, or if q->rpm_status
>                is RPM_SUSPENDING and the REQ_PM flag isn't set, call
>                pm_request_resume().
>
>        When a request finishes:
>                Call pm_runtime_mark_last_busy().
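
Those two checks could be wrapped in small helpers called from the
request add and completion paths, roughly as below.  The helper names
and the q->nr_pending bookkeeping are placeholders, REQ_PM is the flag
proposed in this series, and q->queue_lock is assumed to be held:

static void block_pm_add_request(struct request_queue *q, struct request *rq)
{
	q->nr_pending++;
	if (q->rpm_status == RPM_SUSPENDED ||
	    (q->rpm_status == RPM_SUSPENDING && !(rq->cmd_flags & REQ_PM)))
		pm_request_resume(q->dev);	/* kick off an async resume */
}

static void block_pm_put_request(struct request_queue *q, struct request *rq)
{
	q->nr_pending--;
	pm_runtime_mark_last_busy(q->dev);
}
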
>
> Next, you have to change the parts of the block layer responsible for
> taking a request from the queue and handing it to the lower-level
> driver (both peek and get).  If q->rpm_status is RPM_SUSPENDED, they
> shouldn't do anything -- act as though the queue is empty.  If
> q->rpm_status is RPM_SUSPENDING or RPM_RESUMING, they should hand over
> the request only if it has the REQ_PM flag set.
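
In the peek/get paths that decision could boil down to one small
predicate, for example (again only a sketch, with q->queue_lock held):

static bool block_pm_allow_request(struct request_queue *q, struct request *rq)
{
	switch (q->rpm_status) {
	case RPM_SUSPENDED:
		return false;			/* act as though the queue is empty */
	case RPM_SUSPENDING:
	case RPM_RESUMING:
		return rq->cmd_flags & REQ_PM;	/* only PM requests go through */
	default:
		return true;			/* RPM_ACTIVE */
	}
}
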
>
> For this to work, the block layer has to know what struct device
> pointer to pass to the pm_runtime_* routines.  You'll have to add that
> information to the request_queue structure; I guess q->dev can get set
> by block_runtime_pm_init().  In fact, when that's done you won't need
> q->rpm_status any more.  You'll be able to use q->dev->power.rpm_status
> directly, and you won't have to update it because the PM core does that
> for you.
>
> (Or maybe it would be easier to make q->rpm_status be a pointer to
> q->dev->power.rpm_status.  That way, if CONFIG_PM_RUNTIME isn't enabled
> or block_runtime_pm_init() hasn't been called, you can have
> q->rpm_status simply point to a static value that is permanently set to
> RPM_ACTIVE.)
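
If q->rpm_status were made a pointer as in that alternative, the init
helper above might instead do something like the sketch below (in
struct dev_pm_info the runtime PM state field is named runtime_status,
which is what this thread refers to as rpm_status):

static enum rpm_status block_rpm_active = RPM_ACTIVE;	/* never changes */

void block_queue_pm_defaults(struct request_queue *q)
{
	/* default: behave as permanently active until a driver opts in */
	q->rpm_status = &block_rpm_active;
}

void block_runtime_pm_init(struct request_queue *q, struct device *dev)
{
#ifdef CONFIG_PM_RUNTIME
	q->dev = dev;
	q->rpm_status = &dev->power.runtime_status;
	/* plus the pm_runtime_* setup calls shown earlier */
#endif
}
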

I think we still need q->rpm_status.
The block layer checks ->rpm_status and the client driver sets it,
and the status is synchronized under the queue's spinlock.

If we used q->dev->power.rpm_status instead, how would it be synchronized
between the block layer and the client driver?
Do you mean the block layer would need to acquire q->dev->power.lock?

Lin Ming

>
> I may have left some parts out from this brief description.  Hopefully
> you'll be able to figure out the general idea and get it to work.
>
> Alan Stern
