Message-ID: <Pine.LNX.4.44L0.1005291056490.31946-100000@netrider.rowland.org>
Date:	Sat, 29 May 2010 11:03:24 -0400 (EDT)
From:	Alan Stern <stern@...land.harvard.edu>
To:	Brian Swetland <swetland@...gle.com>
cc:	Florian Mickler <florian@...kler.org>,
	Peter Zijlstra <peterz@...radead.org>,
	"Rafael J. Wysocki" <rjw@...k.pl>,
	Linux PM <linux-pm@...ts.linux-foundation.org>,
	Arve Hjønnevåg <arve@...roid.com>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Matthew Garrett <mjg59@...f.ucam.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>
Subject: Re: [linux-pm] [PATCH 0/8] Suspend block api (version 8)

On Sat, 29 May 2010, Brian Swetland wrote:

> On Sat, May 29, 2010 at 7:10 AM, Alan Stern <stern@...land.harvard.edu> wrote:

> > If no such constraints are active, the QoS-based suspend blocks in an
> > interruptible wait until the number of active QOS_EVENTUALLY
> > constraints drops to 0.  When that happens, it carries out a normal
> > suspend-to-RAM -- except that it checks along the way to make sure that
> > no new QoS constraints are activated while the suspend is in progress.
> > If they are, the PM core backs out and fails the QoS-based suspend.
> >
> > Userspace suspend blockers don't exist at all, as far as the kernel is
> > concerned.  In their place, Android runs a power-manager program
> > that receives IPC requests from other processes when they need to
> > prevent the system from suspending or allow it to suspend.  The power
> > manager's main loop looks like this:
> >
> >        for (;;) {
> >                while (any IPC requests remain)
> >                        handle them;
> >                if (any processes need to prevent suspend)
> >                        sleep;
> >                else
> >                        write "qos" to /sys/power/state;
> >        }
> 
> The issue with this approach is that if userspace wants to suspend
> while a driver is holding a QOS_EVENTUALLY constraint, it's basically
> going to spin constantly writing "qos" and failing.

No, no.  If userspace wants to suspend while a driver is holding a 
QOS_EVENTUALLY constraint, the user process blocks in an interruptible 
wait state as described in the first paragraph above.

> Could we have write(powerstate_fd, "qos",3) block until all
> QOS_EVENTUALLY constraints are lifted or the system successfully
> suspends and resumes or a signal arrives?

That is basically what I originally wrote.

> > The idea is that receipt of a new IPC request will cause a signal to be
> > sent, interrupting the sleep or the "qos" write.
> 
> Alternatively (to ipc), we could have a driver provide the same
> suspend-block-style interface to userspace and map it to qos
> constraints.  If it's something we maintain out-of-tree, no worries.
> The kernel-side api is the real headache, after all: it's what forces
> maintaining multiple versions of drivers, with and without it.

Yep.  The idea is that all of this userspace-oriented machinery should
be invisible to the vanilla kernel.

> I'm sure Arve will weigh in on this later, but from what I can see it
> certainly seems like this model provides us with the functionality
> we're looking for, provided the issue with
> spinning-while-waiting-for-drivers-to-release-constraints is sorted
> out.

I'm more concerned about how the other kernel developers will react.

Alan Stern

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
