Date:	Tue, 27 Nov 2007 14:42:20 -0500
From:	Neil Horman <nhorman@...hat.com>
To:	Ben Woodard <woodard@...hat.com>
Cc:	"Eric W. Biederman" <ebiederm@...ssion.com>,
	Andi Kleen <ak@...e.de>, Neil Horman <nhorman@...hat.com>,
	kexec@...ts.infradead.org, linux-kernel@...r.kernel.org,
	hbabu@...ibm.com, vgoyal@...ibm.com
Subject: Re: [PATCH] kexec: force x86_64 arches to boot kdump kernels on boot cpu

On Tue, Nov 27, 2007 at 10:41:15AM -0800, Ben Woodard wrote:
> >>But if CPU #0 has interrupts disabled no interrupts get delivered.
> >>
> >>So choices are:
> >>- Move to CPU #0
> >>- Do not use legacy mode during shutdown.
> >    (Do not use legacy mode in the kdump kernel. Removing it from shutdown
> >     is just a minor optimization.)
> >>- Or do not rely on interrupts after enabling legacy mode
> >>- Or do not disable interrupts on the other CPUs when they're
> >>halted.
> >>
> >>First and last option are probably unreliable for the kdump case.
> >>Second or third sound best. 
> >>
> 
> I can agree with the fourth option being a very bad one but I really 
> haven't seen anything in this discussion which supports the assertion 
> that "Move to the CPU that the BIOS originally called CPU#0" is going to 
> be unreliable. Admittedly we haven't tried this on every single x86_64 
> platform that we have but on the handful that we have tried so far, it 
> hasn't been a problem. Why is everybody jumping to the assumption that 
> it will be less reliable?
> 

Ben, I tend to agree.  Re-enabling the APIC early in the boot process improves
reliability in that it more quickly restores the system to a state where
booting on a cpu other than cpu0 is likely to work, but overall, booting a
secondary kernel on cpu0, when possible, offers the highest degree of
reliability.

Perhaps what we need is a 'both' solution.  Re-enabling the APIC to full SMP
functionality early in the boot process is a good fix for the problems we are
hypothesizing about here, and would be a good thing to do in general, but it
doesn't preclude also attempting to switch back to cpu0 during a crash.

Perhaps it would be worthwhile to:

1) Investigate the early enablement of the ioapic for x86[_64]
2) Implement my previously proposed patch with the addition of a handshake
element, such that:
	a) when the boot cpu gets the IPI from machine_crash_shutdown, it
	   flags the fact that it is going to boot the kexec kernel with a
	   global variable
	b) the crashing cpu loops, waiting for either:
		I) a timeout of 1 second
		II) a reduction of the halt count to zero
		III) the setting of the flag mentioned in (a)
	c) the crashing cpu, if it sees that it is not the boot cpu AND that
	   the flag from (a) is set, halts itself; otherwise it sets the flag
	   and boots the kexec image itself.

With this modification, we can try to relocate to cpu0, and if we fail, we
fall back to booting on the crashing processor.
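To make the handshake in (2) concrete, here is a rough kernel-style sketch.
This is illustrative only: crash_ipi_handler(), crash_handshake() and
kexec_cpu0_takeover are made-up names, not actual kernel symbols, though
waiting_for_crash_ipi mirrors the counter the x86 crash code already uses.

#include <linux/atomic.h>
#include <linux/delay.h>
#include <linux/smp.h>

static atomic_t waiting_for_crash_ipi;	/* halt count, one per other cpu */
static atomic_t kexec_cpu0_takeover;	/* the global flag from step (a) */

/* Runs on every cpu that receives the crash IPI. */
static void crash_ipi_handler(void)
{
	if (smp_processor_id() == 0) {
		/* (a) the boot cpu flags that it will boot the kexec kernel */
		atomic_set(&kexec_cpu0_takeover, 1);
		/* ...save state, jump into the kexec image; never returns... */
	}
	atomic_dec(&waiting_for_crash_ipi);
	for (;;)
		halt();
}

/* Runs on the cpu that took the crash, after sending the crash IPIs. */
static void crash_handshake(void)
{
	int msecs = 1000;	/* (b-I) timeout of 1 second */

	/* (b) loop until timeout, halt count zero, or the cpu0 flag is set */
	while (msecs-- && atomic_read(&waiting_for_crash_ipi) > 0 &&
	       !atomic_read(&kexec_cpu0_takeover))
		mdelay(1);

	/* (c) cpu0 has taken over and we are not cpu0, so just halt */
	if (smp_processor_id() != 0 && atomic_read(&kexec_cpu0_takeover))
		for (;;)
			halt();

	/* otherwise set the flag and boot the kexec image ourselves */
	atomic_set(&kexec_cpu0_takeover, 1);
	/* ...machine_kexec() on the crashing processor... */
}

The 1 second timeout in (b-I) keeps the crashing cpu from hanging forever when
cpu0 never receives the IPI, e.g. because it is halted with interrupts
disabled, which is exactly the failure case discussed above.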

I'll work up a patch that implements (2), unless there are strong objections.
I see no reason why we can't implement this 'both' solution.

Regards
Neil

-- 
/***************************************************
 *Neil Horman
 *Software Engineer
 *Red Hat, Inc.
 *nhorman@...hat.com
 *gpg keyid: 1024D / 0x92A74FA1
 *http://pgp.mit.edu
 ***************************************************/
