Message-ID: <CAF1ivSa_hYFu2JXLUOv2k+AhqUZWUvY-uzmocQKh_RUJJUP5Ag@mail.gmail.com>
Date:	Wed, 23 May 2012 16:41:13 +0800
From:	Lin Ming <ming.m.lin@...el.com>
To:	Alan Stern <stern@...land.harvard.edu>
Cc:	Jens Axboe <axboe@...nel.dk>, Jeff Moyer <jmoyer@...hat.com>,
	linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
	linux-scsi@...r.kernel.org
Subject: Re: [RFC v2 PATCH 2/4] block: add queue runtime pm callbacks

On Tue, May 22, 2012 at 11:56 PM, Alan Stern <stern@...land.harvard.edu> wrote:
> On Tue, 22 May 2012, Lin Ming wrote:
>
>> > (Or maybe it would be easier to make q->rpm_status be a pointer to
>> > q->dev->power.rpm_status.  That way, if CONFIG_PM_RUNTIME isn't enabled
>> > or block_runtime_pm_init() hasn't been called, you can have
>> > q->rpm_status simply point to a static value that is permanently set to
>> > RPM_ACTIVE.)
>>
>> I think we need q->rpm_status.
>> The block layer checks ->rpm_status and the client driver sets this status.
>
> No, the client driver should not have to set any status values.  The
> client driver should do as little as possible.

Ah, I meant that the runtime PM core will set the status, not the client driver.
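
For reference, Alan's pointer idea from above could be sketched roughly like
this (a hypothetical userspace mock, not real kernel code; all names and
types here are illustrative stand-ins):

```c
#include <assert.h>

/* Mock of the suggestion: instead of a separate q->rpm_status field,
 * the queue holds a pointer that aims either at dev->power.rpm_status
 * or, when runtime PM is compiled out or block_runtime_pm_init() was
 * never called, at a static value permanently set to RPM_ACTIVE. */

enum rpm_status { RPM_ACTIVE, RPM_RESUMING, RPM_SUSPENDED, RPM_SUSPENDING };

static const enum rpm_status always_active = RPM_ACTIVE;

struct mock_power { enum rpm_status rpm_status; };
struct mock_dev   { struct mock_power power; };

struct mock_queue {
	const enum rpm_status *rpm_status;	/* never NULL */
};

/* Default: readers always see RPM_ACTIVE, so the hot path needs no
 * "is runtime PM set up?" branch. */
static void mock_queue_init(struct mock_queue *q)
{
	q->rpm_status = &always_active;
}

/* Called only when the driver opts in to runtime PM. */
static void mock_block_runtime_pm_init(struct mock_queue *q,
				       struct mock_dev *dev)
{
	q->rpm_status = &dev->power.rpm_status;
}
```

With this shape, *q->rpm_status can be read unconditionally; the
non-runtime-PM case simply reads the pinned RPM_ACTIVE value.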

>
>> And the status is synchronized with queue's spin lock.
>
> Right, and the client driver should not need to acquire the queue's
> lock.
>
>> If we use q->dev->power.rpm_status then how to sync it between block
>> layer and client driver?
>> Do you mean block layer will need to acquire q->dev->power.lock?
>
> That's not what I mean.
>
> What synchronization are you concerned about?  The most important race

Let's consider the code below.

@@ -587,6 +591,11 @@ void __elv_add_request(struct request_queue *q, struct request *rq, int where)
 {
       trace_block_rq_insert(q, rq);

+       if (!(rq->cmd_flags & REQ_PM))
+               if (q->nr_pending++ == 0 && (q->rpm_status == RPM_SUSPENDED ||
+                               q->rpm_status == RPM_SUSPENDING) && q->dev)
+                       pm_request_resume(q->dev);
+
       rq->q = q;

       if (rq->cmd_flags & REQ_SOFTBARRIER) {

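The inserted check can be exercised in isolation with a small userspace
mock (hypothetical types; the RPM_* states and field names mirror the hunk,
and pm_request_resume() is stubbed to just record that a resume was
requested):

```c
#include <assert.h>
#include <stdbool.h>

enum rpm_status { RPM_ACTIVE, RPM_RESUMING, RPM_SUSPENDED, RPM_SUSPENDING };

struct mock_dev { bool resume_requested; };

struct mock_queue {
	int nr_pending;
	enum rpm_status rpm_status;
	struct mock_dev *dev;
};

#define MOCK_REQ_PM (1 << 0)	/* flag marking a PM-initiated request */

static void mock_pm_request_resume(struct mock_dev *dev)
{
	dev->resume_requested = true;	/* stand-in for the async resume */
}

/* Mirrors the added lines of __elv_add_request(): the first normal
 * request arriving while the device is suspended (or suspending)
 * kicks off a resume; PM requests pass through untouched so that a
 * suspend/resume can still make progress. */
static void mock_add_request(struct mock_queue *q, unsigned int cmd_flags)
{
	if (!(cmd_flags & MOCK_REQ_PM))
		if (q->nr_pending++ == 0 &&
		    (q->rpm_status == RPM_SUSPENDED ||
		     q->rpm_status == RPM_SUSPENDING) && q->dev)
			mock_pm_request_resume(q->dev);
}
```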
The block layer reads the runtime status and the PM core writes it.
The PM core uses dev->power.lock to protect this status.

I was wondering whether there would be a problem if the block layer
does not acquire dev->power.lock.
From your explanation below, it seems there is no problem.

Thanks,
Lin Ming

> seems to be when a new request is added to the queue at the same time
> as a runtime suspend begins.
>
> If q->dev->power.rpm_status has already been set to RPM_SUSPENDING or
> RPM_SUSPENDED when the request is submitted, the block layer should
> call pm_runtime_request_resume().  Thus if the suspend succeeds, the
> device will be resumed immediately afterward.  And if the suspend
> fails, the new request will be handled as usual (note that the
> block_*_runtime_* routines might need to kick-start the queue to get it
> going again).
>
> Alternatively, if q->dev->power.rpm_status is still equal to RPM_ACTIVE
> when the request is submitted, the block layer will simply accept the
> request.  If the request is still on the queue when
> block_pre_runtime_suspend is called, it will return -EBUSY and the
> suspend will fail.
>
> The only synchronization needed to make this work is that the
> block_{pre,post}_runtime_suspend routines need to hold the queue's lock
> while checking to see if any requests are in the queue.  You'd expect
> that anyway.
>
> Alan Stern
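
The suspend-side check described above can be sketched as a userspace mock
(hypothetical names; a pthread mutex stands in for the queue's spinlock,
and -EBUSY is the only kernel convention borrowed):

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

struct mock_queue {
	pthread_mutex_t lock;
	int nr_pending;
};

/* Mirrors block_pre_runtime_suspend(): take the queue lock, and fail
 * the suspend with -EBUSY if any requests are still pending.  This is
 * the only synchronization the scheme needs - a request that raced in
 * while rpm_status still read RPM_ACTIVE is caught here, and one that
 * arrived after the status changed triggers pm_request_resume(). */
static int mock_block_pre_runtime_suspend(struct mock_queue *q)
{
	int ret = 0;

	pthread_mutex_lock(&q->lock);
	if (q->nr_pending > 0)
		ret = -EBUSY;	/* a request raced in; abort the suspend */
	pthread_mutex_unlock(&q->lock);
	return ret;
}
```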