Message-ID: <CALCETrUaHysYacCF1t_Sap0jHhqBUb7dUKjaVDtPyM-kUMR3sw@mail.gmail.com>
Date: Fri, 26 Jan 2018 11:02:08 -0800
From: Andy Lutomirski <luto@...nel.org>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: Andy Lutomirski <luto@...nel.org>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
Dave Hansen <dave.hansen@...el.com>, X86 ML <x86@...nel.org>,
Borislav Petkov <bp@...en8.de>,
Neil Berrington <neil.berrington@...acore.com>,
LKML <linux-kernel@...r.kernel.org>,
stable <stable@...r.kernel.org>
Subject: Re: [PATCH v2 1/2] x86/mm/64: Fix vmapped stack syncing on
very-large-memory 4-level systems
On Fri, Jan 26, 2018 at 10:51 AM, Kirill A. Shutemov
<kirill@...temov.name> wrote:
> On Thu, Jan 25, 2018 at 01:12:14PM -0800, Andy Lutomirski wrote:
>> Neil Berrington reported a double-fault on a VM with 768GB of RAM that
>> uses large amounts of vmalloc space with PTI enabled.
>>
>> The cause is that load_new_mm_cr3() was never fixed to take the
>> 5-level pgd folding code into account, so, on a 4-level kernel, the
>> pgd synchronization logic compiles away to exactly nothing.
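(Context on why it compiles away: with CONFIG_PGTABLE_LEVELS == 4 the
p4d level is folded, and include/asm-generic/pgtable-nop4d.h stubs the
pgd predicates out to constants, roughly:

    /* Folded p4d: a pgd entry can never be "none", so any sync
     * guarded by pgd_none() constant-folds into dead code. */
    static inline int pgd_none(pgd_t pgd)    { return 0; }
    static inline int pgd_present(pgd_t pgd) { return 1; }

so a copy like "if (pgd_none(*pgd)) set_pgd(pgd, *pgd_ref);" compiles
to nothing on a 4-level kernel.)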
>
> Ouch. Sorry for this.
>
>>
>> Interestingly, the problem doesn't trigger with nopti. I assume this
>> is because the kernel is mapped with global pages if we boot with
>> nopti. The sequence of operations when we create a new task is that
>> we first load its mm while still running on the old stack (which
>> crashes if the old stack is unmapped in the new mm unless the TLB
>> saves us), then we call prepare_switch_to(), and then we switch to the
>> new stack. prepare_switch_to() pokes the new stack directly, which
>> will populate the mapping through vmalloc_fault(). I assume that
>> we're getting lucky on non-PTI systems -- the old stack's TLB entry
>> stays alive long enough to make it all the way through
>> prepare_switch_to() and switch_to() so that we make it to a valid
>> stack.
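(For reference, the stack poke in question is roughly this, paraphrased
from arch/x86/include/asm/switch_to.h; the exact signature may differ
between kernel versions:

    static inline void prepare_switch_to(struct task_struct *prev,
                                         struct task_struct *next)
    {
    #ifdef CONFIG_VMAP_STACK
            /*
             * Touch the new task's stack while we still have a valid
             * stack of our own: if its pgd entry is missing from the
             * current mm, this read takes a page fault that
             * vmalloc_fault() can fix up, rather than a later
             * unrecoverable double fault with no usable stack.
             */
            READ_ONCE(*(unsigned char *)next->thread.sp);
    #endif
    }
)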
>>
>> Fixes: b50858ce3e2a ("x86/mm/vmalloc: Add 5-level paging support")
>> Cc: stable@...r.kernel.org
>> Reported-and-tested-by: Neil Berrington <neil.berrington@...acore.com>
>> Signed-off-by: Andy Lutomirski <luto@...nel.org>
>> ---
>> arch/x86/mm/tlb.c | 34 +++++++++++++++++++++++++++++-----
>> 1 file changed, 29 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
>> index a1561957dccb..5bfe61a5e8e3 100644
>> --- a/arch/x86/mm/tlb.c
>> +++ b/arch/x86/mm/tlb.c
>> @@ -151,6 +151,34 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>> local_irq_restore(flags);
>> }
>>
>> +static void sync_current_stack_to_mm(struct mm_struct *mm)
>> +{
>> + unsigned long sp = current_stack_pointer;
>> + pgd_t *pgd = pgd_offset(mm, sp);
>> +
>> + if (CONFIG_PGTABLE_LEVELS > 4) {
>
> Can we have
>
> if (PTRS_PER_P4D > 1)
>
> here instead? This way I wouldn't need to touch the code again for
> boot-time switching support.
Want to send a patch?
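Presumably something along these lines (untested sketch; the else
branch mirrors the p4d sync this patch adds for the 4-level case):

    	/*
    	 * PTRS_PER_P4D > 1 exactly when the p4d level is real, and
    	 * with boot-time 5-level switching it becomes a runtime
    	 * value, so one branch covers both configurations.
    	 */
    	if (PTRS_PER_P4D > 1) {
    		if (unlikely(pgd_none(*pgd))) {
    			pgd_t *pgd_ref = pgd_offset_k(sp);

    			set_pgd(pgd, *pgd_ref);
    		}
    	} else {
    		/* The "pgd" is faked; the top-level entries are p4ds. */
    		p4d_t *p4d = p4d_offset(pgd, sp);

    		if (unlikely(p4d_none(*p4d))) {
    			p4d_t *p4d_ref = p4d_offset(pgd_offset_k(sp), sp);

    			set_p4d(p4d, *p4d_ref);
    		}
    	}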
(Also, I haven't noticed a patch to fix up the SYSRET checking for
boot-time switching. Have I just missed it?)
--Andy