Message-Id: <201208132156.48942.rjw@sisk.pl>
Date:	Mon, 13 Aug 2012 21:56:48 +0200
From:	"Rafael J. Wysocki" <rjw@...k.pl>
To:	Alan Stern <stern@...land.harvard.edu>
Cc:	Linux PM list <linux-pm@...r.kernel.org>,
	Ming Lei <tom.leiming@...il.com>,
	LKML <linux-kernel@...r.kernel.org>,
	"Greg Kroah-Hartman" <gregkh@...uxfoundation.org>
Subject: Re: [PATCH][Alternative][RFC] PM / Runtime: Introduce driver runtime PM work routine

On Monday, August 13, 2012, Alan Stern wrote:
> On Mon, 13 Aug 2012, Rafael J. Wysocki wrote:
> 
> > > __pm_runtime_barrier() has never made very strong guarantees about 
> > > runtime_resume callbacks.  But the kerneldoc does claim that any 
> > > pending callbacks will have been completed, and that claim evidently is 
> > > violated in the rpm_resume(parent,0) case.
> > 
> > "Flush all pending requests for the device from pm_wq and wait for all
> > runtime PM operations involving the device in progress to complete."
> > 
> > It doesn't mention the parent.
> 
> You're missing the point.  Suppose you do an async resume and while the
> workqueue routine is executing rpm_resume(parent,0), another thread
> calls pm_runtime_barrier() for the same device.  The barrier will
> return more or less immediately, even though there is a runtime PM
> operation involving the device (that is, the async resume) still in
> progress.  The rpm_resume() routine was running before
> pm_runtime_barrier() was called and will still be running afterward.

I see what you mean now.
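
(Editor's note: to make the race concrete, here is a rough interleaving.
Function and field names are as in drivers/base/power/runtime.c; the exact
sequence is a reading of the code, not a verified trace.)

```c
/*
 * pm_wq worker (async resume)          another thread
 * ---------------------------          --------------
 * pm_runtime_work(dev)
 *   rpm_resume(dev, 0)
 *     drops dev->power.lock and
 *     calls rpm_resume(parent, 0)      __pm_runtime_barrier(dev)
 *       parent resuming ...              no request pending for dev,
 *                                        dev->power.runtime_status is
 *                                        neither RPM_RESUMING nor
 *                                        RPM_SUSPENDING
 *                                        -> returns immediately
 *     reacquires dev->power.lock,
 *     runs dev's ->runtime_resume()
 *     after the barrier has returned
 */
```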

> > But I agree, it's not very clear.
> > 
> > > Maybe the kerneldoc needs to be changed, or maybe we need to fix the code.
> > 
> > If anything, I'd change the kerneldoc.  The code pretty much has to be
> > what it is in this respect.
> 
> I'm not sure what guarantees pm_runtime_barrier() _can_ make about
> runtime_resume callbacks.  If you call that routine while the device is
> suspended then a runtime_resume callback could occur at any moment,
> because userspace can write "on" to the power/control attribute
> whenever it wants to.
> 
> I guess the best we can say is that if you call pm_runtime_barrier()  
> after updating the dev_pm_ops method pointers then after the barrier
> returns, the old method pointers will not be invoked and the old method
> routines will not be running.  So we need an equivalent guarantee with
> regard to the pm_runtime_work pointer.  (Yes, we could use a better 
> name for that pointer.)
> 
> Which means the code in the patch isn't quite right, because it saves 
> the pm_runtime_work pointer before calling rpm_resume().  Maybe we 
> should avoid looking at the pointer until rpm_resume() returns.

Yes, we can do that.

Alternatively, we can set power.work_in_progress before calling
rpm_resume(dev, 0) (i.e. regard the resume as a part of the work) to make
the barrier wait for all of it to complete.
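
(Editor's note: a sketch of that second option.  The pm_runtime_work() /
power.work_in_progress names follow the patch under discussion; the rest is
hypothetical, with locking deliberately simplified -- real code would drop
the lock around the driver callback -- and it is not compilable as-is.)

```c
static void pm_runtime_work(struct work_struct *work)
{
	struct device *dev =
		container_of(work, struct device, power.work);

	spin_lock_irq(&dev->power.lock);
	/*
	 * Regard the resume as part of the work, so that
	 * __pm_runtime_barrier() waits for all of it to complete.
	 */
	dev->power.work_in_progress = true;
	rpm_resume(dev, 0);
	/*
	 * Read the driver's pointer only after rpm_resume() has
	 * returned, per the concern about stale method pointers above.
	 */
	if (dev->driver && dev->driver->pm &&
	    dev->driver->pm->pm_runtime_work)
		dev->driver->pm->pm_runtime_work(dev);
	dev->power.work_in_progress = false;
	wake_up_all(&dev->power.wait_queue);
	spin_unlock_irq(&dev->power.lock);
}
```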

> > I think that it's better to reorder the checks so that the final ordering is:
> > 
> > * check power.no_callbacks and parent status
> > * check RPM_RUN_WORK
> > * check RPM_RESUMING || RPM_SUSPENDING
> > * check RPM_ASYNC
> > 
> > so that we don't schedule the execution of pm_runtime_work() if
> > power.no_callbacks is set and the parent is active and we still do the
> > power.deferred_resume optimization if RPM_RUN_WORK is unset.
> 
> That seems reasonable.

OK
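
(Editor's note: for reference, the agreed ordering inside rpm_resume() would
look roughly like this.  A sketch only: bodies are elided, the RPM_RUN_WORK
flag comes from the patch, and the surrounding control flow is simplified.)

```c
	/* 1. No-callback devices with an active parent resume directly. */
	if (dev->power.no_callbacks && dev->parent &&
	    dev->parent->power.runtime_status == RPM_ACTIVE) {
		/* update the status without invoking any callback */
	}

	/* 2. Only then route RPM_RUN_WORK requests through pm_wq. */
	else if (rpmflags & RPM_RUN_WORK) {
		/* queue pm_runtime_work() */
	}

	/* 3. A resume or suspend already in progress: wait for it, or set
	 *    power.deferred_resume if we cannot sleep. */
	else if (dev->power.runtime_status == RPM_RESUMING ||
		 dev->power.runtime_status == RPM_SUSPENDING) {
		/* wait, or use the power.deferred_resume optimization */
	}

	/* 4. Plain RPM_ASYNC requests are queued last. */
	else if (rpmflags & RPM_ASYNC) {
		/* queue the request */
	}
```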

Thanks,
Rafael
