Message-ID: <ccde7771-0bfc-579f-5682-ea7bb130205a@redhat.com>
Date: Wed, 5 Apr 2017 13:50:27 +0200
From: Denys Vlasenko <dvlasenk@...hat.com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: brgerst@...il.com, torvalds@...ux-foundation.org,
akpm@...ux-foundation.org, luto@...capital.net,
linux-kernel@...r.kernel.org, jpoimboe@...hat.com, luto@...nel.org,
peterz@...radead.org, bp@...en8.de, hpa@...or.com,
dave.hansen@...el.com, tglx@...utronix.de, mingo@...nel.org,
linux-tip-commits@...r.kernel.org
Subject: Re: [tip:x86/mm] x86/asm: Remove __VIRTUAL_MASK_SHIFT==47 assert
On 04/05/2017 01:12 PM, Kirill A. Shutemov wrote:
> On Tue, Apr 04, 2017 at 05:36:33PM +0200, Denys Vlasenko wrote:
>>> diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
>>> index 044d18e..f07b4ef 100644
>>> --- a/arch/x86/entry/entry_64.S
>>> +++ b/arch/x86/entry/entry_64.S
>>> @@ -265,12 +265,9 @@ return_from_SYSCALL_64:
>>> *
>>> * If width of "canonical tail" ever becomes variable, this will need
>>> * to be updated to remain correct on both old and new CPUs.
>>> + *
>>> + * Change top 16 bits to be the sign-extension of 47th bit
>>
>> The comment above stops being correct: it's not necessarily the top 16 bits
>> we sign-extend now. With a larger __VIRTUAL_MASK_SHIFT for 5-level translation,
>> it will become 7 bits (if I did the math right).
>
> Does the patch below look okay to you?
>
>>> */
>>> - .ifne __VIRTUAL_MASK_SHIFT - 47
>>> - .error "virtual address width changed -- SYSRET checks need update"
>>> - .endif
>>> -
>>> - /* Change top 16 bits to be the sign-extension of 47th bit */
>>> shl $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
>>> sar $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
>>
>> The bigger problem here would be the future boot-time choice of 4/5-level
>> page tables: __VIRTUAL_MASK_SHIFT will need to depend on that choice,
>> but in this location it is preferable to not use any variables
>> (memory references).
>
> Yeah. I'll see what I can come up with. Not sure yet.
>
> -------------------8<----------------------
>
> From 2433cf4f8847bbc41cc2b02d6af4f191b3b5a0c5 Mon Sep 17 00:00:00 2001
> From: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> Date: Wed, 5 Apr 2017 14:06:15 +0300
> Subject: [PATCH] x86/asm: Fix comment in return_from_SYSCALL_64
>
> On x86-64 __VIRTUAL_MASK_SHIFT depends on paging mode now.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> ---
> arch/x86/entry/entry_64.S | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> index 607d72c4a485..c70e064d9592 100644
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -266,7 +266,8 @@ return_from_SYSCALL_64:
> * If width of "canonical tail" ever becomes variable, this will need
> * to be updated to remain correct on both old and new CPUs.
> *
> - * Change top 16 bits to be the sign-extension of 47th bit
> + * Change top bits to match most significant valuable bit (47 or 56
> + * depending on paging mode) in the address.
Er.... "Change top bits ... (47 or 56 [bits] depending on paging mode)"?
I know that's wrong and that's not what you meant to say,
but it can be read that way too. "47th" instead of "47"
would eliminate this reading, but you removed the "th".
Spell it out to eliminate any chance of confusion:
Change top bits to match most significant bit (47th or 56th bit
depending on paging mode) in the address.
> */
> shl $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
> sar $(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
>