Date:	Wed, 14 Jul 2010 14:23:06 -0700
From:	Linus Torvalds <torvalds@...ux-foundation.org>
To:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <peterz@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Steven Rostedt <rostedt@...tedt.homelinux.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Christoph Hellwig <hch@....de>, Li Zefan <lizf@...fujitsu.com>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Johannes Berg <johannes.berg@...el.com>,
	Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>,
	Arnaldo Carvalho de Melo <acme@...radead.org>,
	Tom Zanussi <tzanussi@...il.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Andi Kleen <andi@...stfloor.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	Jeremy Fitzhardinge <jeremy@...p.org>,
	"Frank Ch. Eigler" <fche@...hat.com>, Tejun Heo <htejun@...il.com>
Subject: Re: [patch 1/2] x86_64 page fault NMI-safe

On Wed, Jul 14, 2010 at 1:39 PM, Mathieu Desnoyers
<mathieu.desnoyers@...icios.com> wrote:
>
>>  - load percpu NMI stack frame pointer
>>  - if non-zero we know we're nested, and should ignore this NMI:
>>     - we're returning to kernel mode, so return immediately by using
>> "popf/ret", which also keeps NMI's disabled in the hardware until the
>> "real" NMI iret happens.
>
> Maybe incrementing a per-cpu missed NMIs count could be appropriate here so we
> know how many NMIs should be replayed at iret ?

No. As mentioned, there is no such counter in real hardware either.

Look at what happens for the not-nested case:

 - NMI1 triggers. The CPU takes a fault, and runs the NMI handler with
NMI's disabled

 - NMI2 triggers. Nothing happens, the NMI's are disabled.

 - NMI3 triggers. Again, nothing happens, the NMI's are still disabled

 - the NMI handler returns.

 - What happens now?

How many NMI interrupts do you get? ONE. Exactly like my "emulate it
in software" approach. The hardware doesn't have any counters for
pending NMI's either. Why should the software emulation have them?

>>     - before the popf/iret, use the NMI stack pointer to make the NMI
>> return stack be invalid and cause a fault
>
> I assume you mean "popf/ret" here.

Yes, that was a typo. The whole point of using popf was obviously to
_avoid_ the iret ;)

> So assuming we use a frame copy, we should
> change the nmi stack pointer in the nesting 0 nmi stack copy, so the nesting 0
> NMI iret will trigger the fault
>
>>   - set the NMI stack pointer to the current stack pointer
>
> That would mean bringing back the NMI stack pointer to the (nesting - 1) nmi
> stack copy.

I think you're confused. Or I am by your question.

The NMI code would literally just do:

 - check if the NMI was nested, by looking at whether the percpu
nmi-stack-pointer is non-NULL

 - if it was nested, do nothing, and return with a popf/ret. The only
stack this sequence might need is to save/restore the register that
we use for the percpu value (although maybe we can just do a "cmpl
$0,%__percpu_seg:nmi_stack_ptr" and not even need that), and it's
atomic because at this point we know that NMI's are disabled (we've
not _yet_ taken any nested faults)

 - if it's a regular (non-nesting) NMI, we'd basically do

     6* pushq 48(%rsp)

   to copy the five words that the NMI pushed (ss/esp/eflags/cs/eip)
and the one we saved ourselves (if we needed any, maybe we can make do
with just 5 words).

 - then we just save that new stack pointer to the percpu thing with a simple

     movq %rsp,%__percpu_seg:nmi_stack_ptr

and we're all done. The final "iret" will do the right thing (either
fault or return), and there are no races that I can see exactly
because we use a single nmi-atomic instruction (the "iret" itself) to
either re-enable NMI's _or_ test whether we should re-do an NMI.

There is a single-instruction window that is interesting in the return
path, which is the window between the two final instructions:

    movl $0,%__percpu_seg:nmi_stack_ptr
    iret

where I wonder what happens if we have re-enabled NMI (due to a fault
in the NMI handler), but we haven't actually taken the NMI itself yet,
so now we _will_ re-use the stack. Hmm. I suspect we need another of
those horrible "if the NMI happens at this particular %rip" cases that
we already have for the sysenter code on x86-32 for the NMI/DEBUG trap
case of fixing up the stack pointer.

And maybe I missed something else. But it does look reasonably simple.
Subtle, but not a lot of code. And the code is all very much about the
NMI itself, not about other random sequences. No?

                Linus