Message-ID: <20100716191304.GB8309@Krystal>
Date:	Fri, 16 Jul 2010 15:13:04 -0400
From:	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <peterz@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Steven Rostedt <rostedt@...tedt.homelinux.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Christoph Hellwig <hch@....de>, Li Zefan <lizf@...fujitsu.com>,
	Lai Jiangshan <laijs@...fujitsu.com>,
	Johannes Berg <johannes.berg@...el.com>,
	Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>,
	Arnaldo Carvalho de Melo <acme@...radead.org>,
	Tom Zanussi <tzanussi@...il.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Andi Kleen <andi@...stfloor.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	Jeremy Fitzhardinge <jeremy@...p.org>,
	"Frank Ch. Eigler" <fche@...hat.com>, Tejun Heo <htejun@...il.com>
Subject: Re: [patch 1/2] x86_64 page fault NMI-safe

Hi Linus,

What I omitted from my original description is that I also test for NMIs
nested over the NMI handler's "regular code" with a per-cpu "nesting" flag,
which addresses the concerns you raised in your reply about function calls and
traps. (The C sketch just before the assembly below summarizes this control
flow.)

I'm self-replying to keep track of Avi's comment about the need to save and
restore cr2 at the beginning and end of the NMI handler, so we don't end up
corrupting a VM's CR2 in the following scenario: trap in VM, then NMI, then
trap within the NMI. I have added cr2 awareness to the code snippet below, so
we should be close to having something that starts to make sense (although I'm
not saying it's bug-free yet) ;)
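
To make that concrete, here is the save/restore idea as a tiny C sketch. The
handler body name is invented for the sketch; the two accessors are written
out as the inline asm they must compile to, because %cr2 can only be moved
through a general-purpose register:

	/* Illustrative sketch of the cr2 save/restore around the handler. */
	static inline unsigned long read_cr2(void)
	{
		unsigned long val;

		asm volatile("movq %%cr2, %0" : "=r" (val));
		return val;
	}

	static inline void write_cr2(unsigned long val)
	{
		asm volatile("movq %0, %%cr2" : : "r" (val));
	}

	static void do_nmi_work(void);	/* may page-fault, clobbering cr2 */

	void nmi_handler(void)
	{
		/* cr2 may hold the fault address of an interrupted VM trap */
		unsigned long saved_cr2 = read_cr2();

		do_nmi_work();

		/* put the interrupted context's cr2 back before returning */
		write_cr2(saved_cr2);
	}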

Please note that I'll be off on vacation for 2 weeks starting this evening (back
on August 2) without Internet access, so my answers might be delayed.

Thanks!

Mathieu


Code originally written by Linus Torvalds, modified by Mathieu Desnoyers,
intending to handle the fake NMI entry gracefully given that NMIs are not
necessarily disabled at the entry point. It uses a "need fake NMI" flag rather
than playing games with CS and faults. When a fake NMI is needed, it simply
jumps back to the beginning of the regular NMI code. The NMI exit code and the
fake NMI entry are made reentrant with respect to NMI handler interruption by
testing, at the very beginning of the NMI handler, whether an NMI is nested
over the whole nmi_atomic .. nmi_atomic_end code region. It also tests for
nested NMIs by keeping a per-cpu "nmi nested" flag; this detects nesting over
the "regular nmi" execution. This code assumes NMIs have a separate stack.

#
# Two per-cpu variables: an "are we nested" flag (one byte), and a
# "do we need to execute a fake NMI" flag (one byte).
# The %rsp at which the stack copy is saved is at a fixed address, which
# leaves enough room at the bottom of the NMI stack for the "real" NMI entry
# stack. This assumes we have a separate NMI stack.
# The NMI stack copy top of stack is at nmi_stack_copy.
# The NMI stack copy "rip" slot is at nmi_stack_copy_rip, which is set to
# nmi_stack_copy-48 (cr2 plus the five hardware frame words, eight bytes
# each).
#
nmi:
	# Test if nested over atomic code.
	cmpq $nmi_atomic,0(%rsp)
	jae nmi_addr_is_ae
	# Test if nested over general NMI code.
	cmpb $0,%__percpu_seg:nmi_stack_nesting
	jne nmi_nested_set_fake_and_return
	# create new stack
is_unnested_nmi:
	# Save some space for nested NMIs. The exception itself
	# will never use more space, but it might use less (since
	# it will be a kernel-kernel transition).

	# Save %rax on top of the entry stack (we need a scratch register),
	# then stash cr2 next to it: %cr2 cannot be pushed directly, it can
	# only be moved through a general-purpose register.
	pushq %rax
	movq %cr2,%rax      # save cr2 to handle nesting over page faults
	pushq %rax
	movq %rsp,%rax      # %rax points at: cr2, saved rax, rip, cs, ...
	movq $nmi_stack_copy,%rsp

	# copy cr2 and the five words of stack info. rip starts at 16+0(%rax).
	# cr2 ends up at nmi_stack_copy_rip+40.
	pushq 0(%rax)       # cr2
	pushq 16+32(%rax)   # ss
	pushq 16+24(%rax)   # rsp
	pushq 16+16(%rax)   # eflags
	pushq 16+8(%rax)    # cs
	pushq 16+0(%rax)    # rip
	movq 8(%rax),%rax   # restore %rax

set_nmi_nesting:
	# and set the nesting flag
	movb $0xff,%__percpu_seg:nmi_stack_nesting

regular_nmi_code:
	...
	# regular NMI code goes here, and can take faults,
	# because this sequence now has proper nested-nmi
	# handling
	...
	# Restore cr2 at the tail of this replayable region, before the
	# saved registers are popped: a mov to %cr2 must go through a
	# general-purpose register, and redoing this restore on a fake-NMI
	# replay is harmless, since it rewrites the same saved value.
	movq nmi_stack_copy_rip+40,%rax
	movq %rax,%cr2
	...	# pop the saved registers, then fall through to nmi_atomic

nmi_atomic:
	# An NMI nesting over the whole nmi_atomic .. nmi_atomic_end region
	# will be handled specially. This includes the fake NMI entry point.
	# cr2 was already restored above, inside the replayable region, so
	# this exit sequence needs no scratch register.
	cmpb $0,%__percpu_seg:need_fake_nmi
	jne fake_nmi
	movb $0,%__percpu_seg:nmi_stack_nesting
	iret

	# This is the fake NMI entry point.
fake_nmi:
	movb $0x0,%__percpu_seg:need_fake_nmi
	jmp regular_nmi_code
nmi_atomic_end:

	# The saved rip is at or above nmi_atomic. Make sure it is also below
	# nmi_atomic_end and that the saved CS is the kernel code segment.
nmi_addr_is_ae:
	cmpq $nmi_atomic_end,0(%rsp)
	jae is_unnested_nmi
	# The saved rip falls within the atomic exit sequence. Check the CS
	# segment to make sure.
	cmpw $__KERNEL_CS,8(%rsp)
	jne is_unnested_nmi

# This is the case where a nested NMI hit just as a previous NMI was about to
# run its atomic exit code. We run the NMI using the old return frame that is
# still in the stack copy, rather than copying the new one, which is bogus and
# points to where the nested NMI interrupted the original NMI handler!
# Easy: just set the stack pointer to point to the stack copy, clear
# need_fake_nmi (because we are directly going to execute the requested NMI)
# and jump to "nesting flag set" (which is followed by regular nmi code
# execution).
	movq $nmi_stack_copy_rip,%rsp
	movb $0x0,%__percpu_seg:need_fake_nmi
	jmp set_nmi_nesting

# This is the actual nested case. Make sure we branch to the fake NMI handler
# after the current handler is done, and return with iret: it restores rip,
# eflags and the interrupted handler's %rsp atomically, and re-enabling NMIs
# here is fine, since NMIs were observably enabled already (that is how this
# nested NMI got in).
nmi_nested_set_fake_and_return:
	movb $0xff,%__percpu_seg:need_fake_nmi
	iretq


-- 
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com