Message-ID: <20120203171809.GL5650@redhat.com>
Date:	Fri, 3 Feb 2012 12:18:09 -0500
From:	Don Zickus <dzickus@...hat.com>
To:	"Luck, Tony" <tony.luck@...el.com>
Cc:	Seiji Aguchi <seiji.aguchi@....com>,
	Chen Gong <gong.chen@...ux.intel.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Matthew Garrett <mjg@...hat.com>,
	Vivek Goyal <vgoyal@...hat.com>,
	"Chen, Gong" <gong.chen@...el.com>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	"Brown, Len" <len.brown@...el.com>,
	"'ying.huang@...el.com'" <'ying.huang@...el.com'>,
	"'ak@...ux.intel.com'" <'ak@...ux.intel.com'>,
	"'hughd@...omium.org'" <'hughd@...omium.org'>,
	"'mingo@...e.hu'" <'mingo@...e.hu'>,
	"jmorris@...ei.org" <jmorris@...ei.org>,
	"a.p.zijlstra@...llo.nl" <a.p.zijlstra@...llo.nl>,
	"namhyung@...il.com" <namhyung@...il.com>,
	"dle-develop@...ts.sourceforge.net" 
	<dle-develop@...ts.sourceforge.net>,
	Satoru Moriya <satoru.moriya@....com>
Subject: Re: [RFC][PATCH v4 -next 1/4] Move kmsg_dump(KMSG_DUMP_PANIC) below
 smp_send_stop()

On Fri, Jan 20, 2012 at 05:56:15PM +0000, Luck, Tony wrote:
> > Do you have any comments?
> 
> I'm stuck because I don't know how to assign probabilities to
> the failure cases with kmsg_dump() before and after the smp_send_stop().
> 
> There's a well documented tendency in humans to stick with the status
> quo in such situations.  I'm definitely finding it hard to provide
> a positive recommendation (ACK).
> 
> So I'll just talk out loud here for a bit in case someone sees
> something obviously flawed in my understanding.

Thanks for trying to think this through.
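
For reference, here is the ordering in question, as a heavily trimmed
sketch of the two calls in kernel/panic.c whose order the patch swaps.
This is illustration only, not the actual code, and the function name is
made up:

#include <linux/kmsg_dump.h>
#include <linux/smp.h>

/* Sketch only: the two orderings under discussion. */
static void panic_ordering_sketch(void)
{
	/* Current flow: dump the log tail while the other cpus still run. */
	kmsg_dump(KMSG_DUMP_PANIC);	/* write the log tail via pstore back ends */
	smp_send_stop();		/* then stop the other cpus */

	/*
	 * Proposed flow (this patch): stop the other cpus first, so they
	 * cannot be holding pstore/back-end locks while we dump:
	 *
	 *	smp_send_stop();
	 *	kmsg_dump(KMSG_DUMP_PANIC);
	 */
}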

> 
> Problem statement: We'd like to maximize our chances of saving the
> tail of the kernel log when the system goes down. With the current
> ordering there is a concern that other cpus will interfere with the
> one that is saving the log.
> 
> Problems in current code flow:
> *) Other cpus might hold locks that we need. Our options are to fail,
>    or to "bust" the locks (but busting the locks may lead to other
>    problems in the code path - those locks were there for a reason).
>    There are only a couple of ways that this could be an issue.
>    1) The lock is held because someone is doing some other pstore
>    filesystem operation (reading and erasing records). This has a
>    very low probability. Normal code flow will have some process harvest
>    records from pstore in some /etc/rc.d/* script - this process should
>    take much less than a second.

ok.

>    2) The lock is held because some other kmsg_dump() store is in progress.
>    This one seems more worrying - think of an OOPS (or several) right
>    before we panic.

I think Seiji had another patch to address this.

> 
> Problems in proposed code flow:
> *) smp_send_stop() fails:
>    1) doesn't actually stop other cpus (we are no worse off than before we
>    made this change)
>    2) doesn't return - so we don't even try to dump to pstore back end. x86
>    code has recently been hardened (though I can still imagine a pathological
>    case where in a crash the cpu calling this is uncertain of its own
>    identity, and somehow manages to stop itself - perhaps we are so screwed up
>    in this case that we have no hope anyway)

Where in the code do you see that it might not return?  We can also
conceive of a scenario in which the pstore or APEI code has a bug and
oopses the box a second time, leaving us no better off.  That code has a
lot more churn than the shutdown code, I believe.

> *) Even if it succeeds - we may still run into problems busting locks because
>    even though the cpu that held them isn't executing, the data structures
>    or device registers protected by the lock may be in an inconsistent state.
> *) If we had just let these other cpus keep running, they'd have finished their
>    operation and freed up the problem lock anyway

So this is an interesting concern and it would be nice to have that extra
second to finish things off before breaking the spin lock.  I was trying
to figure out if there was a way to do that.

On x86 I think we can (and maybe other arches can too).  I had a thought:
most spinlocks are taken with interrupts disabled (i.e. spin_lock_irqsave()),
and disabling interrupts also blocks IPIs.  Originally the shutdown code
used the REBOOT_IPI to stop the other cpus, but that would fail if some
piece of code on another cpu was spinning forever with irqs disabled, so I
modified it to use NMIs to be more robust.
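
To make the irq/IPI interaction concrete, here is a minimal sketch of why
a maskable IPI like the REBOOT vector has to wait for the unlock while an
NMI does not.  The lock and function names are made up; this is not from
any of the patches:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(backend_lock);	/* hypothetical back-end lock */

static void write_record(void)
{
	unsigned long flags;

	/*
	 * spin_lock_irqsave() disables local interrupts, so a maskable
	 * IPI (like the REBOOT vector) sent to this cpu stays pending
	 * until the matching spin_unlock_irqrestore().  An NMI is not
	 * maskable and would still come in immediately.
	 */
	spin_lock_irqsave(&backend_lock, flags);

	/* ... touch the shared back-end state here ... */

	spin_unlock_irqrestore(&backend_lock, flags);
	/* irqs back on: any pending REBOOT IPI is delivered about here */
}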

What if we send the REBOOT_IPI first and let it wait for up to a second?
Most code paths that take spinlocks with irqs disabled release them with
spin_unlock_irqrestore().  As soon as interrupts are re-enabled, the
pending REBOOT_IPI comes in and takes the processor.  If after a second
the cpu is still blocking interrupts, just use the NMI as a big hammer to
shut it down.
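
Roughly, the shape I have in mind is something like the sketch below.  The
two send_*() helpers are just placeholders for the arch-specific IPI/NMI
plumbing, not real functions, and this is not the actual x86
stop_other_cpus() code:

#include <linux/cpumask.h>
#include <linux/delay.h>

void send_reboot_ipi_to_others(void);	/* placeholder: maskable REBOOT IPI */
void send_nmi_to_others(void);		/* placeholder: NMI to remaining cpus */

static void stop_other_cpus_sketch(void)
{
	unsigned long timeout = 1000 * 1000;	/* ~1 second, in microseconds */

	send_reboot_ipi_to_others();	/* waits for irqs to be re-enabled */

	/*
	 * Give everyone up to a second to reach spin_unlock_irqrestore().
	 * Assumes the IPI/NMI handlers mark their cpu offline, so
	 * num_online_cpus() drops as cpus park themselves.
	 */
	while (num_online_cpus() > 1 && timeout--)
		udelay(1);

	/* Big hammer for anything still spinning with irqs disabled. */
	if (num_online_cpus() > 1)
		send_nmi_to_others();
}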

This would allow the pstore code to hold off the shutdown for a little
while (up to a second) to finish writing its data out before accepting the
IPI.  If it ends up taking more than a second and the NMI has to come in,
we probably need to investigate what is going on anyway.

Would that help win you over?

I know that is x86-specific, but other arches might be able to adopt a
similar approach?

Cheers,
Don
