Message-ID: <20141029155109.GH2265@localhost>
Date:	Wed, 29 Oct 2014 16:51:09 +0100
From:	Johan Hovold <johan@...nel.org>
To:	Guenter Roeck <linux@...ck-us.net>
Cc:	Johan Hovold <johan@...nel.org>,
	Russell King - ARM Linux <linux@....linux.org.uk>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Felipe Balbi <balbi@...com>,
	Alessandro Zummo <a.zummo@...ertech.it>,
	Tony Lindgren <tony@...mide.com>,
	Benoît Cousson <bcousson@...libre.com>,
	Lokesh Vutla <lokeshvutla@...com>, nsekhar@...com,
	t-kristo@...com, j-keerthy@...com, linux-omap@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org, devicetree@...r.kernel.org,
	rtc-linux@...glegroups.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 00/20] rtc: omap: fixes and power-off feature

On Wed, Oct 29, 2014 at 08:25:02AM -0700, Guenter Roeck wrote:
> On Wed, Oct 29, 2014 at 02:22:44PM +0100, Johan Hovold wrote:
> > On Wed, Oct 29, 2014 at 01:10:20PM +0000, Russell King - ARM Linux wrote:
> > > On Wed, Oct 29, 2014 at 01:34:18PM +0100, Johan Hovold wrote:
> > > > On Tue, Oct 28, 2014 at 03:16:10PM +0000, Russell King - ARM Linux wrote:
> > > > > And how is that different from having a set of power-off handlers, and
> > > > > reporting when each individual one fails?  Don't you want to know if
> > > > > your primary high priority reboot handler fails, just as much as you
> > > > > want to know if your final last-resort power-off handler fails?
> > > > 
> > > > Good point. Failed power-off should probably be logged by the power-off
> > > > call chain implementation (which seems to make notifier chains a bad
> > > > fit).
> > > > 
> > > > And what about any power-off latencies? Should these always be dealt
> > > > with in the power-off handler?
> > > > 
> > > > Again, if it's predictable and high, as in the OMAP RTC case, it should
> > > > go in the handler. But what if it's just normal bus latencies
> > > > (peripheral busses, i2c, or whatever people may come up with)?
> > > > 
> > > > Should there always be a short delay before calling the next handler?
> > > 
> > > If the handler has determined that it has failed, then why delay before
> > > trying the next handler?  At the point it has decided it has failed,
> > > surely that's after it has waited sufficient time to determine that
> > > failure?
> > 
> > The current handlers we have are not expecting any other handler to be
> > run after they return. My question was whether all these handlers should
> > get a short mdelay added to them (e.g. to compensate for bus latencies)
> 
> Some of them do add a delay.

Yes, but not all.

> > or if this could be done in the power-off handler (e.g. before printing
> > the error message).
> > 
> That might make sense, but it would have to be configurable, since the delay
> is platform specific and the power-off handler does not know how long to wait
> (the longest delay I have seen is 10 seconds).

We've already concluded in this thread that such delays must be encoded
in the actual handler (even if it's an argument to the power-off call
chain code). The only exception should be generic handlers such as
gpio-poweroff, which may need to define different delays depending on
the board. This could of course just be a property of the corresponding
DT node.
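
For gpio-poweroff that could look roughly like the sketch below; the
"poweroff-delay-ms" property name is made up here, and everything but
the delay handling is left out:

#include <linux/delay.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pm.h>

/*
 * Rough sketch only: "poweroff-delay-ms" is a made-up property name,
 * and the actual GPIO handling is omitted.
 */
static u32 poweroff_delay_ms = 100;		/* default */

static void gpio_poweroff_do_poweroff(void)
{
	/* assert the power-off GPIO here ... */

	/* ... then give the board time to actually lose power */
	mdelay(poweroff_delay_ms);
}

static int gpio_poweroff_probe(struct platform_device *pdev)
{
	/* pick up a board-specific delay from DT, if present */
	of_property_read_u32(pdev->dev.of_node, "poweroff-delay-ms",
			     &poweroff_delay_ms);

	pm_power_off = gpio_poweroff_do_poweroff;
	return 0;
}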

My question above was whether it would be reasonable to add some
generic short delay after calling each power-off handler, to cover
short power-off latencies (e.g. bus latencies), so that not every
handler needs an explicit delay.
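
Something along these lines is what I have in mind (rough sketch; the
poweroff_handler struct, the handler list and the 10 ms figure are all
placeholders):

#include <linux/delay.h>
#include <linux/list.h>
#include <linux/printk.h>

/*
 * Sketch of a power-off call chain that logs every failed handler and
 * inserts a short generic delay between attempts, so that individual
 * handlers need not all carry their own mdelay().
 */
struct poweroff_handler {
	struct list_head list;
	const char *name;
	void (*handler)(void);
};

static LIST_HEAD(poweroff_handlers);

static void do_poweroff_chain(void)
{
	struct poweroff_handler *ph;

	list_for_each_entry(ph, &poweroff_handlers, list) {
		ph->handler();

		/* generic grace period for bus latencies etc. */
		mdelay(10);

		/* still here, so this handler failed */
		pr_emerg("power-off handler %s failed\n", ph->name);
	}
}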

> > > > > Or different from having no power-off handlers.
> > > > 
> > > > That is actually quite different, as in that case we call machine_halt
> > > > instead (via kernel_halt).
> > > 
> > > Today, ARM does exactly what x86 does.  If there's no power off handler
> > > registered, machine_power_off() shuts down other CPUs and returns.
> > 
> > No, if there are no power-off handlers registered, kernel/reboot.c will
> > never call machine_power_off:
> > 
> > 	/* Instead of trying to make the power_off code look like
> > 	 * halt when pm_power_off is not set do it the easy way.
> > 	 */
> > 	if ((cmd == LINUX_REBOOT_CMD_POWER_OFF) && !pm_power_off)
> > 		cmd = LINUX_REBOOT_CMD_HALT;
> > 
> > So in that case on arm, a system-halted message is printed, and we never
> > return to user-space.
> > 
> Some architectures do that, or go into an endless loop. Others do return
> from machine_power_off.

Please re-read my comment and the code above. machine_power_off is never
called if there's no handler registered.

On some archs, machine_power_off spins on failed power-off (i.e. when
there is a handler), something which I've mentioned a few times already
in this thread.
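
The pattern on those archs is roughly (simplified, not any particular
arch's implementation):

#include <linux/pm.h>
#include <asm/processor.h>	/* cpu_relax() */

void machine_power_off(void)
{
	if (pm_power_off)
		pm_power_off();

	/* only reached if power-off failed: spin forever */
	while (1)
		cpu_relax();
}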

> Having a well defined behavior would be nice
> (such as dumping an error message and calling machine_halt if
> machine_power_off returns). Only question would be where to put it.
> kernel_power_off() might be a good place; only problem is that there
> are direct callers of machine_power_off().

Indeed. Adding an error message to the power-off handler call chain code
would solve the first problem as I mentioned before.
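
Putting the fallback in kernel_power_off() could then look something
like this; it's only a sketch against kernel/reboot.c, with the
hypothetical fallback added at the end:

/* kernel/reboot.c (sketch) */
void kernel_power_off(void)
{
	kernel_shutdown_prepare(SYSTEM_POWER_OFF);
	if (pm_power_off_prepare)
		pm_power_off_prepare();
	migrate_to_reboot_cpu();
	syscore_shutdown();
	pr_emerg("Power down\n");
	kmsg_dump(KMSG_DUMP_POWEROFF);
	machine_power_off();

	/*
	 * Hypothetical fallback: machine_power_off() returned, so
	 * power-off failed; halt instead of returning to user-space.
	 */
	pr_emerg("Power off failed, halting instead\n");
	machine_halt();
}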

Then it's mostly a matter of whether we care about consistency, and
either remove the indefinite spins from those non-x86/non-arm arches, or
prevent x86 and arm (and some others) from returning to user-space.

I'm inclined to have all arches return to user-space on failed
power-off, even if it means systemd cannot call reboot() from PID 1.

Johan