Date:   Wed, 5 Apr 2017 14:12:58 +0300
From:   "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
To:     Denys Vlasenko <dvlasenk@...hat.com>
Cc:     brgerst@...il.com, torvalds@...ux-foundation.org,
        akpm@...ux-foundation.org, luto@...capital.net,
        linux-kernel@...r.kernel.org, jpoimboe@...hat.com, luto@...nel.org,
        peterz@...radead.org, bp@...en8.de, hpa@...or.com,
        dave.hansen@...el.com, tglx@...utronix.de, mingo@...nel.org,
        linux-tip-commits@...r.kernel.org
Subject: Re: [tip:x86/mm] x86/asm: Remove __VIRTUAL_MASK_SHIFT==47 assert

On Tue, Apr 04, 2017 at 05:36:33PM +0200, Denys Vlasenko wrote:
> > diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> > index 044d18e..f07b4ef 100644
> > --- a/arch/x86/entry/entry_64.S
> > +++ b/arch/x86/entry/entry_64.S
> > @@ -265,12 +265,9 @@ return_from_SYSCALL_64:
> >  	 *
> >  	 * If width of "canonical tail" ever becomes variable, this will need
> >  	 * to be updated to remain correct on both old and new CPUs.
> > +	 *
> > +	 * Change top 16 bits to be the sign-extension of 47th bit
> 
> The comment above is no longer correct: it's not necessarily the top 16 bits
> that we sign-extend now. With the larger __VIRTUAL_MASK_SHIFT for 5-level
> translation, it will be 7 bits (if I do the math right).
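
For reference, the quoted shl/sar pair sign-extends %rcx from bit
__VIRTUAL_MASK_SHIFT. A minimal C sketch of the same computation (the
function name and test values are illustrative, not kernel code) confirms
the 16-bit vs. 7-bit arithmetic:

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Sketch of the canonicalization done by the shl/sar pair:
	 * sign-extend the address from bit virtual_mask_shift.
	 */
	static uint64_t canonicalize(uint64_t addr, int virtual_mask_shift)
	{
		int shift = 64 - (virtual_mask_shift + 1); /* 16 for 47, 7 for 56 */

		/* Shift left as unsigned, then arithmetic-shift right as signed. */
		return (uint64_t)((int64_t)(addr << shift) >> shift);
	}

	int main(void)
	{
		/* 4-level paging: bits 63..48 become copies of bit 47. */
		printf("%#llx\n", (unsigned long long)canonicalize(0x0000800000000000ULL, 47));
		/* 5-level paging: bits 63..57 become copies of bit 56. */
		printf("%#llx\n", (unsigned long long)canonicalize(0x0100000000000000ULL, 56));
		return 0;
	}

With __VIRTUAL_MASK_SHIFT == 47 the shift count is 16, matching the old
comment; with 56 it is 7, matching the estimate above.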

Does the patch below look okay to you?
 
> >  	 */
> > -	.ifne __VIRTUAL_MASK_SHIFT - 47
> > -	.error "virtual address width changed -- SYSRET checks need update"
> > -	.endif
> > -
> > -	/* Change top 16 bits to be the sign-extension of 47th bit */
> >  	shl	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
> >  	sar	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
> 
> The bigger problem here would be the future boot-time choice between 4- and
> 5-level page tables: __VIRTUAL_MASK_SHIFT will need to depend on that choice,
> but in this location it is preferable not to use any variables
> (memory references).
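
One conceivable direction (a sketch only, not something proposed in this
thread; it assumes the ALTERNATIVE assembler macro and the
X86_FEATURE_LA57 CPU feature flag are usable at this point) would be
boot-time code patching, which keeps the fast path free of memory
references:

	/* Hypothetical: patch the shift count at boot instead of loading it. */
	ALTERNATIVE "shl $(64 - 48), %rcx; sar $(64 - 48), %rcx", \
		    "shl $(64 - 57), %rcx; sar $(64 - 57), %rcx", \
		    X86_FEATURE_LA57

The immediates stay encoded in the instructions; only the chosen variant
differs per boot.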

Yeah. I'll see what I can come up with. Not sure yet.

-------------------8<----------------------

From 2433cf4f8847bbc41cc2b02d6af4f191b3b5a0c5 Mon Sep 17 00:00:00 2001
From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Date: Wed, 5 Apr 2017 14:06:15 +0300
Subject: [PATCH] x86/asm: Fix comment in return_from_SYSCALL_64

On x86-64, __VIRTUAL_MASK_SHIFT now depends on the paging mode.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
---
 arch/x86/entry/entry_64.S | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 607d72c4a485..c70e064d9592 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -266,7 +266,8 @@ return_from_SYSCALL_64:
 	 * If width of "canonical tail" ever becomes variable, this will need
 	 * to be updated to remain correct on both old and new CPUs.
 	 *
-	 * Change top 16 bits to be the sign-extension of 47th bit
+	 * Change top bits to match the most significant bit (bit 47 or 56,
+	 * depending on paging mode) of the address.
 	 */
 	shl	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
 	sar	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
-- 
 Kirill A. Shutemov
