Message-ID: <tip-9cd342705877526f387cfcb5df8a964ab5873deb@git.kernel.org>
Date:   Fri, 20 Jul 2018 12:37:37 -0700
From:   tip-bot for Joerg Roedel <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     linux-kernel@...r.kernel.org, jpoimboe@...hat.com, bp@...en8.de,
        dhgutteridge@...patico.ca, luto@...nel.org, aarcange@...hat.com,
        torvalds@...ux-foundation.org, dvlasenk@...hat.com, pavel@....cz,
        brgerst@...il.com, jgross@...e.com, hpa@...or.com,
        mingo@...nel.org, jroedel@...e.de, gregkh@...uxfoundation.org,
        peterz@...radead.org, David.Laight@...lab.com, jolsa@...hat.com,
        boris.ostrovsky@...cle.com, acme@...nel.org, namhyung@...nel.org,
        will.deacon@....com, jkosina@...e.cz,
        alexander.shishkin@...ux.intel.com, eduval@...zon.com,
        tglx@...utronix.de, dave.hansen@...el.com, llong@...hat.com
Subject: [tip:x86/pti] x86/entry/32: Check for VM86 mode in slow-path check

Commit-ID:  9cd342705877526f387cfcb5df8a964ab5873deb
Gitweb:     https://git.kernel.org/tip/9cd342705877526f387cfcb5df8a964ab5873deb
Author:     Joerg Roedel <jroedel@...e.de>
AuthorDate: Fri, 20 Jul 2018 18:22:23 +0200
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Fri, 20 Jul 2018 21:32:08 +0200

x86/entry/32: Check for VM86 mode in slow-path check

The SWITCH_TO_KERNEL_STACK macro only checks for CPL == 0 to go down the
slow and paranoid entry path. The problem is that this check also matches
entries from VM86 mode, because the CS value saved for a VM86 frame is a
real-mode segment whose low two bits carry no privilege information. This
is not a problem by itself, as the paranoid path handles VM86 stack frames
just fine, but it is unnecessary because the normal code path handles VM86
mode as well, and does so faster.
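
For illustration only, here is a minimal C sketch (not the kernel's actual
code, which is assembly) of what the old check computes. SEGMENT_RPL_MASK
mirrors the kernel's 0x3 definition; the CS value in main() is a made-up
VM86 example, not a real selector.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SEGMENT_RPL_MASK 0x3u	/* low two selector bits (RPL) */

/* Old logic: "low two bits of the saved CS are zero" was read as
 * "entry from kernel mode" and sent the entry down the paranoid path. */
static bool old_check_entry_from_kernel(uint32_t saved_cs)
{
	return (saved_cs & SEGMENT_RPL_MASK) == 0;
}

int main(void)
{
	/* In VM86 mode the saved CS is a real-mode segment value, so its
	 * low two bits carry no privilege meaning; CS == 0x0100 is
	 * classified as a kernel-mode entry, which is harmless but
	 * needlessly takes the slower path. */
	printf("vm86 CS 0x0100 -> kernel? %d\n",
	       old_check_entry_from_kernel(0x0100));
	return 0;
}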

Extend the check to include VM86 mode. This also makes an optimization of
the paranoid path possible.
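
As a rough illustration of the extended check (again plain C, not the
actual assembly below), this sketch mixes EFLAGS.VM with the CS RPL bits
the same way the patch does. The constants mirror the kernel's
X86_EFLAGS_VM (bit 17), SEGMENT_RPL_MASK (0x3) and USER_RPL (3); the
register values in main() are made-up example frames, where only the VM
flag and the low two CS bits matter.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define X86_EFLAGS_VM    0x00020000u	/* CPU was in virtual-8086 mode */
#define SEGMENT_RPL_MASK 0x3u		/* low two selector bits (RPL) */
#define USER_RPL         3u		/* user space runs at ring 3 */

/* True only when the VM flag is clear and the low CS bits are below 3,
 * i.e. the one case that still needs the paranoid entry path. */
static bool new_check_entry_from_kernel(uint32_t pt_eflags, uint32_t pt_cs)
{
	/* movl PT_EFLAGS, %ecx then movb PT_CS, %cl:
	 * the low byte of the mixed value comes from CS. */
	uint32_t mixed = (pt_eflags & ~0xffu) | (pt_cs & 0xffu);

	mixed &= X86_EFLAGS_VM | SEGMENT_RPL_MASK;	/* andl */
	return mixed < USER_RPL;			/* cmpl $USER_RPL; jb */
}

int main(void)
{
	/* Made-up example frames: kernel, user and VM86 entries. */
	printf("kernel: %d\n", new_check_entry_from_kernel(0x00000002, 0x0060)); /* 1 */
	printf("user:   %d\n", new_check_entry_from_kernel(0x00000202, 0x0073)); /* 0 */
	printf("vm86:   %d\n", new_check_entry_from_kernel(0x00020202, 0x0100)); /* 0 */
	return 0;
}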

Signed-off-by: Joerg Roedel <jroedel@...e.de>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: "H . Peter Anvin" <hpa@...or.com>
Cc: linux-mm@...ck.org
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Dave Hansen <dave.hansen@...el.com>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Juergen Gross <jgross@...e.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Jiri Kosina <jkosina@...e.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@...cle.com>
Cc: Brian Gerst <brgerst@...il.com>
Cc: David Laight <David.Laight@...lab.com>
Cc: Denys Vlasenko <dvlasenk@...hat.com>
Cc: Eduardo Valentin <eduval@...zon.com>
Cc: Greg KH <gregkh@...uxfoundation.org>
Cc: Will Deacon <will.deacon@....com>
Cc: aliguori@...zon.com
Cc: daniel.gruss@...k.tugraz.at
Cc: hughd@...gle.com
Cc: keescook@...gle.com
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Waiman Long <llong@...hat.com>
Cc: Pavel Machek <pavel@....cz>
Cc: "David H . Gutteridge" <dhgutteridge@...patico.ca>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc: Jiri Olsa <jolsa@...hat.com>
Cc: Namhyung Kim <namhyung@...nel.org>
Cc: joro@...tes.org
Link: https://lkml.kernel.org/r/1532103744-31902-3-git-send-email-joro@8bytes.org

---
 arch/x86/entry/entry_32.S | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 010cdb41e3c7..2767c625a52c 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -414,8 +414,16 @@
 	andl	$(0x0000ffff), PT_CS(%esp)
 
 	/* Special case - entry from kernel mode via entry stack */
-	testl	$SEGMENT_RPL_MASK, PT_CS(%esp)
-	jz	.Lentry_from_kernel_\@
+#ifdef CONFIG_VM86
+	movl	PT_EFLAGS(%esp), %ecx		# mix EFLAGS and CS
+	movb	PT_CS(%esp), %cl
+	andl	$(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %ecx
+#else
+	movl	PT_CS(%esp), %ecx
+	andl	$SEGMENT_RPL_MASK, %ecx
+#endif
+	cmpl	$USER_RPL, %ecx
+	jb	.Lentry_from_kernel_\@
 
 	/* Bytes to copy */
 	movl	$PTREGS_SIZE, %ecx
