Message-ID: <20141029132244.GD2265@localhost>
Date:	Wed, 29 Oct 2014 14:22:44 +0100
From:	Johan Hovold <johan@...nel.org>
To:	Russell King - ARM Linux <linux@....linux.org.uk>
Cc:	Johan Hovold <johan@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Felipe Balbi <balbi@...com>,
	Alessandro Zummo <a.zummo@...ertech.it>,
	Tony Lindgren <tony@...mide.com>,
	Benoît Cousson <bcousson@...libre.com>,
	Lokesh Vutla <lokeshvutla@...com>,
	Guenter Roeck <linux@...ck-us.net>, nsekhar@...com,
	t-kristo@...com, j-keerthy@...com, linux-omap@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org, devicetree@...r.kernel.org,
	rtc-linux@...glegroups.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 00/20] rtc: omap: fixes and power-off feature

On Wed, Oct 29, 2014 at 01:10:20PM +0000, Russell King - ARM Linux wrote:
> On Wed, Oct 29, 2014 at 01:34:18PM +0100, Johan Hovold wrote:
> > On Tue, Oct 28, 2014 at 03:16:10PM +0000, Russell King - ARM Linux wrote:
> > > And how is that different from having a set of power-off handlers, and
> > > reporting when each individual one fails?  Don't you want to know if
> > > your primary high priority reboot handler fails, just as much as you
> > > want to know if your final last-resort power-off handler fails?
> > 
> > Good point. Failed power-off should probably be logged by the power-off
> > call chain implementation (which seems to make notifier chains a bad
> > fit).
> > 
> > And what about any power-off latencies? Should this always be dealt with
> > in the power-off handler?
> > 
> > Again, if it's predictable and high, as in the OMAP RTC case, it should
> > go in the handler. But what if it's just normal bus latencies
> > (peripheral busses, i2c, or whatever people may come up with)?
> > 
> > Should there always be a short delay before calling the next handler?
> 
> If the handler has determined that it has failed, then why delay before
> trying the next handler?  At the point it has decided it has failed,
> surely that's after it has waited sufficient time to determine that
> failure?

The current handlers do not expect any other handler to be run after
they return. My question was whether all of these handlers should get a
short mdelay added to them (e.g. to compensate for bus latencies), or
whether this could be done by the power-off call chain (e.g. before
printing the error message).
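
Something like this, just to illustrate where a common delay could go;
a hypothetical list-based chain with made-up names, not the
notifier-chain proposal:

	struct power_off_handler {
		void (*power_off)(void);
		struct list_head list;
	};

	static LIST_HEAD(power_off_handlers);

	static void do_power_off_handlers(void)
	{
		struct power_off_handler *h;

		list_for_each_entry(h, &power_off_handlers, list) {
			h->power_off();

			/* Compensate for bus latencies before giving up. */
			mdelay(POWER_OFF_DELAY_MS);
			pr_err("power-off handler %ps failed\n", h->power_off);
		}
	}

That way no individual handler needs to know about the handlers that
may run after it.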

> > > Or different from having no power-off handlers.
> > 
> > That is actually quite different, as in that case we call machine_halt
> > instead (via kernel_halt).
> 
> Today, ARM does exactly what x86 does.  If there's no power off handler
> registered, machine_power_off() shuts down other CPUs and returns.

No, if there are no power-off handlers registered, kernel/reboot.c will
never call machine_power_off:

	/* Instead of trying to make the power_off code look like
	 * halt when pm_power_off is not set do it the easy way.
	 */
	if ((cmd == LINUX_REBOOT_CMD_POWER_OFF) && !pm_power_off)
		cmd = LINUX_REBOOT_CMD_HALT;

So in that case on arm, a "System halted" message is printed and we
never return to user space.
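
For reference, the halt path looks roughly like this (paraphrased from
kernel/reboot.c, not verbatim):

	void kernel_halt(void)
	{
		kernel_shutdown_prepare(SYSTEM_HALT);
		migrate_to_reboot_cpu();
		syscore_shutdown();
		pr_emerg("System halted\n");
		kmsg_dump(KMSG_DUMP_HALT);
		machine_halt();
	}

and machine_halt() on arm stops the secondary CPUs and ends in an
infinite loop with interrupts disabled.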

> > > Here's the x86 code:
> > > 
> > > void machine_power_off(void)
> > > {
> > >         machine_ops.power_off();
> > > }
> > > 
> > > struct machine_ops machine_ops = {
> > >         .power_off = native_machine_power_off,
> > > ...
> > > 
> > > static void native_machine_power_off(void)
> > > {
> > >         if (pm_power_off) {
> > >                 if (!reboot_force)
> > >                         machine_shutdown();
> > >                 pm_power_off();
> > >         }
> > >         /* A fallback in case there is no PM info available */
> > >         tboot_shutdown(TB_SHUTDOWN_HALT);
> > > }
> > > 
> > > void tboot_shutdown(u32 shutdown_type)
> > > {
> > >         void (*shutdown)(void);
> > > 
> > >         if (!tboot_enabled())
> > >                 return;
> > > 
> > > See - x86 can very well just fall straight back out of machine_power_off()
> > > if there's no pm_power_off() hook and tboot is not enabled.
> > 
> > I never doubted that, but is it the right thing to do? Not all arches do
> > it that way.
> 
> Well, the biggest question there is: if the power off or restart syscall
> fails, what is the _generic_ non-architecture action which is supposed to
> happen?
> 
> Whatever the answer is to that question, that action should be performed
> by the _generic_ non-architecture code, rather than having the same
> implementation spread across all 30 architectures which the kernel
> supports today.

I fully agree.
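
For example, the generic code could log the failure and fall back to a
halted system when machine_power_off() returns. An untested sketch (the
tail is my addition; the body above it paraphrases the current
kernel/reboot.c):

	void kernel_power_off(void)
	{
		kernel_shutdown_prepare(SYSTEM_POWER_OFF);
		if (pm_power_off_prepare)
			pm_power_off_prepare();
		migrate_to_reboot_cpu();
		syscore_shutdown();
		pr_emerg("Power down\n");
		kmsg_dump(KMSG_DUMP_POWEROFF);
		machine_power_off();

		/* If we get here, power-off failed; don't return. */
		pr_emerg("Power off failed, halting instead\n");
		local_irq_disable();
		while (1)
			cpu_relax();
	}

Then no architecture would have to duplicate the fallback, and we would
never return to the caller of kernel_power_off().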

> > And what about the killing of init? Shall we simply consider that a
> > systemd bug? 
> > 
> > 	case LINUX_REBOOT_CMD_POWER_OFF:
> > 		kernel_power_off();
> > 		do_exit(0);
> > 		break;
> > 
> > If power-off fails (for whatever reason), do_exit(0) will trigger a
> > panic when called from PID 1.
> 
> Oh, systemd calls this from PID1?  I guess that's another reason to hate
> systemd with vitriol.  :)  SysVinit and upstart implementations call it
> from the "halt" command, which is itself normally run from a script,
> which totally avoids that problem.

Yeah, that's why I never noticed the missing mdelay.
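
For reference, the panic comes from this check in do_exit() (paraphrased
from kernel/exit.c, not verbatim):

	if (unlikely(is_global_init(tsk)))
		panic("Attempted to kill init! exitcode=0x%08x\n",
		      tsk->signal->group_exit_code ?: tsk->exit_code);

So if power-off fails and do_exit(0) runs in PID 1, the whole system
goes down.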

> I'm quite sure the insane systemd lobby will scream that this is a kernel
> bug and will want to change it somehow, just like they want to change the
> kernel in soo many other silly ways.

It will be interesting to follow. :)

Johan
