Date:   Fri, 20 Jul 2018 13:37:27 -0700
From:   tip-bot for Joerg Roedel <tipbot@...or.com>
To:     linux-tip-commits@...r.kernel.org
Cc:     tglx@...utronix.de, jpoimboe@...hat.com, David.Laight@...lab.com,
        dvlasenk@...hat.com, alexander.shishkin@...ux.intel.com,
        acme@...nel.org, jolsa@...hat.com, linux-kernel@...r.kernel.org,
        gregkh@...uxfoundation.org, jgross@...e.com, mingo@...nel.org,
        brgerst@...il.com, bp@...en8.de, dave.hansen@...el.com,
        dhgutteridge@...patico.ca, luto@...nel.org, hpa@...or.com,
        jroedel@...e.de, peterz@...radead.org, pavel@....cz,
        llong@...hat.com, jkosina@...e.cz, eduval@...zon.com,
        aarcange@...hat.com, will.deacon@....com, namhyung@...nel.org,
        torvalds@...ux-foundation.org, boris.ostrovsky@...cle.com
Subject: [tip:x86/pti] x86/entry/32: Check for VM86 mode in slow-path check

Commit-ID:  d5e84c21dbf5ea458897f88346dc979909eed913
Gitweb:     https://git.kernel.org/tip/d5e84c21dbf5ea458897f88346dc979909eed913
Author:     Joerg Roedel <jroedel@...e.de>
AuthorDate: Fri, 20 Jul 2018 18:22:23 +0200
Committer:  Thomas Gleixner <tglx@...utronix.de>
CommitDate: Fri, 20 Jul 2018 22:33:41 +0200

x86/entry/32: Check for VM86 mode in slow-path check

The SWITCH_TO_KERNEL_STACK macro checks only for CPL == 0 when deciding
whether to take the slow, paranoid entry path. The problem is that this
check also succeeds when entering from VM86 mode. That is not a bug in
itself, as the paranoid path handles VM86 stack frames just fine, but it
is unnecessary, because the normal code path handles VM86 mode as well,
and does so faster.

Extend the check to include VM86 mode. This also makes an optimization of
the paranoid path possible.
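
An illustrative way to read the extended check: the slow path is only
needed when the CPU was already in kernel mode, i.e. CPL == 0 and
EFLAGS.VM clear. A minimal C sketch of that predicate, not part of the
patch (the helper entry_from_kernel() is hypothetical; the constants
mirror the kernel's definitions):

#include <stdbool.h>
#include <stdint.h>

#define X86_EFLAGS_VM		(1U << 17)	/* virtual-8086 mode flag   */
#define SEGMENT_RPL_MASK	0x3U		/* selector RPL bits        */
#define USER_RPL		0x3U		/* user code runs at ring 3 */

static bool entry_from_kernel(uint32_t eflags, uint32_t cs)
{
	/* VM86 frames go down the normal (faster) user path */
	if (eflags & X86_EFLAGS_VM)
		return false;
	return (cs & SEGMENT_RPL_MASK) < USER_RPL;
}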

Signed-off-by: Joerg Roedel <jroedel@...e.de>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Cc: "H . Peter Anvin" <hpa@...or.com>
Cc: linux-mm@...ck.org
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Dave Hansen <dave.hansen@...el.com>
Cc: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Juergen Gross <jgross@...e.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Jiri Kosina <jkosina@...e.cz>
Cc: Boris Ostrovsky <boris.ostrovsky@...cle.com>
Cc: Brian Gerst <brgerst@...il.com>
Cc: David Laight <David.Laight@...lab.com>
Cc: Denys Vlasenko <dvlasenk@...hat.com>
Cc: Eduardo Valentin <eduval@...zon.com>
Cc: Greg KH <gregkh@...uxfoundation.org>
Cc: Will Deacon <will.deacon@....com>
Cc: aliguori@...zon.com
Cc: daniel.gruss@...k.tugraz.at
Cc: hughd@...gle.com
Cc: keescook@...gle.com
Cc: Andrea Arcangeli <aarcange@...hat.com>
Cc: Waiman Long <llong@...hat.com>
Cc: Pavel Machek <pavel@....cz>
Cc: "David H . Gutteridge" <dhgutteridge@...patico.ca>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: Alexander Shishkin <alexander.shishkin@...ux.intel.com>
Cc: Jiri Olsa <jolsa@...hat.com>
Cc: Namhyung Kim <namhyung@...nel.org>
Cc: joro@...tes.org
Link: https://lkml.kernel.org/r/1532103744-31902-3-git-send-email-joro@8bytes.org

---
 arch/x86/entry/entry_32.S | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 010cdb41e3c7..2767c625a52c 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -414,8 +414,16 @@
 	andl	$(0x0000ffff), PT_CS(%esp)
 
 	/* Special case - entry from kernel mode via entry stack */
-	testl	$SEGMENT_RPL_MASK, PT_CS(%esp)
-	jz	.Lentry_from_kernel_\@
+#ifdef CONFIG_VM86
+	movl	PT_EFLAGS(%esp), %ecx		# mix EFLAGS and CS
+	movb	PT_CS(%esp), %cl
+	andl	$(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %ecx
+#else
+	movl	PT_CS(%esp), %ecx
+	andl	$SEGMENT_RPL_MASK, %ecx
+#endif
+	cmpl	$USER_RPL, %ecx
+	jb	.Lentry_from_kernel_\@
 
 	/* Bytes to copy */
 	movl	$PTREGS_SIZE, %ecx
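
How the CONFIG_VM86 variant folds both tests into one compare: the movb
overwrites the low byte of the saved EFLAGS with the low byte of CS, so
after the andl, %ecx carries EFLAGS.VM in bit 17 and the selector RPL in
bits 0-1 at the same time. The jb then branches to the kernel-entry case
only when the merged value is below USER_RPL (3), i.e. when RPL < 3 and
VM is clear. A hypothetical C mirror of that instruction sequence,
reusing the definitions from the sketch above:

static bool entry_from_kernel_merged(uint32_t eflags, uint32_t cs)
{
	uint32_t ecx;

	ecx = (eflags & ~0xffU) | (cs & 0xffU);	 /* movb PT_CS(%esp), %cl */
	ecx &= X86_EFLAGS_VM | SEGMENT_RPL_MASK; /* andl the mask         */
	return ecx < USER_RPL;			 /* jb => kernel entry    */
}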
