Message-ID: <04EAB7311EE43145B2D3536183D1A8445491DB5E@GSjpTKYDCembx31.service.hitachi.net>
Date:	Wed, 29 Jul 2015 09:09:18 +0000
From:	河合英宏 / KAWAI,HIDEHIRO 
	<hidehiro.kawai.ez@...achi.com>
To:	"'Michal Hocko'" <mhocko@...nel.org>
CC:	Jonathan Corbet <corbet@....net>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Vivek Goyal <vgoyal@...hat.com>,
	"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
	"x86@...nel.org" <x86@...nel.org>,
	"kexec@...ts.infradead.org" <kexec@...ts.infradead.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Ingo Molnar <mingo@...hat.com>,
	平松雅巳 / HIRAMATU,MASAMI 
	<masami.hiramatsu.pt@...achi.com>
Subject: RE: Re: [V2 PATCH 1/3] x86/panic: Fix re-entrance problem due to
 panic on NMI

> From: Michal Hocko [mailto:mhocko@...nel.org]
> On Wed 29-07-15 05:48:47, 河合英宏 / KAWAI,HIDEHIRO wrote:
> > Hi,
> >
> > > From: linux-kernel-owner@...r.kernel.org [mailto:linux-kernel-owner@...r.kernel.org] On Behalf Of Hidehiro Kawai
> > > (2015/07/27 23:34), Michal Hocko wrote:
> > > > On Mon 27-07-15 10:58:50, Hidehiro Kawai wrote:
> > [...]
> > > > The check could also be relaxed a bit: nmi_panic would
> > > > return only if the ongoing panic is on the current cpu, which is when
> > > > we really have to return and allow the preempted panic to finish.
> > >
> > > It's reasonable.  I'll do that in the next version.
> >
> > I noticed atomic_read() is insufficient.  Please consider the following
> > scenario.
> >
> > CPU 1: call panic() in normal context
> > CPU 0: call nmi_panic(), check the value of panic_cpu, then call panic()
> > CPU 1: set panic_cpu to 1
> > CPU 0: fail to set panic_cpu to 0, then loop infinitely
> > CPU 1: call crash_kexec(), then call kdump_nmi_shootdown_cpus()
> >
> > At this point, since CPU 0 loops in NMI context, it never executes
> > the NMI handler registered by kdump_nmi_shootdown_cpus().  This means
> > that no register states are saved and no cleanups for VMX/SVM are
> > performed.
> 
> Yes this is true but it is no different from the current state, isn't
> it? So if you want to handle that then it deserves a separate patch.
> It is certainly not harmful wrt. panic behavior.
> 
> > So, we should still use atomic_cmpxchg() in nmi_panic() to
> > prevent other cpus from running panic routines.
> 
> Not sure what you mean by that.

I mean that we should use the same logic as in my V2 patch, like this:

#define nmi_panic(fmt, ...)                                            \
       do {                                                            \
               if (atomic_cmpxchg(&panic_cpu, -1, raw_smp_processor_id()) \
                   == -1)                                              \
                       panic(fmt, ##__VA_ARGS__);                      \
       } while (0)

By using atomic_cmpxchg() here, we can ensure that only this CPU
runs the panic routines.  This is important for preventing a CPU in
NMI context from calling panic_smp_self_stop().
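
For contrast, the kind of check that the scenario above rules out would
look roughly like this (a hypothetical sketch, not code from the patch):
a plain atomic_read() leaves a window between the check and the call to
panic().

/* Hypothetical race-prone variant (NOT the proposed code): panic_cpu can
 * change between the atomic_read() and the call to panic(), so a CPU in
 * NMI context can still enter panic() concurrently and end up spinning
 * in panic_smp_self_stop(). */
#define nmi_panic(fmt, ...)                                            \
       do {                                                            \
               if (atomic_read(&panic_cpu) == -1)                      \
                       panic(fmt, ##__VA_ARGS__);                      \
       } while (0)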

void panic(const char *fmt, ...)
{
...
       /*
        * `old_cpu == -1' means we are the first comer.
        * `old_cpu == this_cpu' means we came here due to panic on NMI.
        */
       this_cpu = raw_smp_processor_id();
       old_cpu = atomic_cmpxchg(&panic_cpu, -1, this_cpu);
       if (old_cpu != -1 && old_cpu != this_cpu)
               panic_smp_self_stop();
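
Both snippets assume a shared panic_cpu variable initialized to -1;
presumably it would be declared along these lines (my assumption, not
part of the quoted code):

/* Assumed declaration: -1 means no CPU has entered panic() yet. */
atomic_t panic_cpu = ATOMIC_INIT(-1);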

Please assume that CPU 0 calls nmi_panic() in NMI context
and CPU 1 calls panic() in normal context at the same time.

If CPU 1 sets panic_cpu before CPU 0 does, CPU 1 runs the panic routines
and CPU 0 returns from the NMI handler.  CPU 0 is eventually stopped
by nmi_shootdown_cpus().

If CPU 0 sets panic_cpu before CPU 1 does, CPU 0 runs the panic routines.
CPU 1 calls panic_smp_self_stop() and waits for the NMI sent by
nmi_shootdown_cpus().

Anyway, I tested my approach and it worked fine.

Regards,
Kawai
