Message-ID: <20180126185143.dx7emh7cq5pbrkxn@node.shutemov.name>
Date:   Fri, 26 Jan 2018 21:51:43 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     Andy Lutomirski <luto@...nel.org>
Cc:     Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
        Dave Hansen <dave.hansen@...el.com>, X86 ML <x86@...nel.org>,
        Borislav Petkov <bp@...en8.de>,
        Neil Berrington <neil.berrington@...acore.com>,
        LKML <linux-kernel@...r.kernel.org>, stable@...r.kernel.org
Subject: Re: [PATCH v2 1/2] x86/mm/64: Fix vmapped stack syncing on
 very-large-memory 4-level systems

On Thu, Jan 25, 2018 at 01:12:14PM -0800, Andy Lutomirski wrote:
> Neil Berrington reported a double-fault on a VM with 768GB of RAM that
> uses large amounts of vmalloc space with PTI enabled.
> 
> The cause is that load_new_mm_cr3() was never fixed to take the
> 5-level pgd folding code into account, so, on a 4-level kernel, the
> pgd synchronization logic compiles away to exactly nothing.

Ouch. Sorry for this.
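
For context on why it compiles away: on a 4-level build the p4d level is
folded, and the folded helpers in include/asm-generic/pgtable-nop4d.h turn
pgd_none() into a constant 0, so the old stack-sync check in
switch_mm_irqs_off() becomes dead code. A simplified sketch of the
pre-patch pattern (illustrative, not the exact code):

	unsigned int index = pgd_index(current_stack_pointer);
	pgd_t *pgd = next->pgd + index;

	/* With p4d folded, pgd_none() is always 0, so this never runs. */
	if (unlikely(pgd_none(*pgd)))
		set_pgd(pgd, init_mm.pgd[index]);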

> 
> Interestingly, the problem doesn't trigger with nopti.  I assume this
> is because the kernel is mapped with global pages if we boot with
> nopti.  The sequence of operations when we create a new task is that
> we first load its mm while still running on the old stack (which
> crashes if the old stack is unmapped in the new mm unless the TLB
> saves us), then we call prepare_switch_to(), and then we switch to the
> new stack.  prepare_switch_to() pokes the new stack directly, which
> will populate the mapping through vmalloc_fault().  I assume that
> we're getting lucky on non-PTI systems -- the old stack's TLB entry
> stays alive long enough to make it all the way through
> prepare_switch_to() and switch_to() so that we make it to a valid
> stack.
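
The poke mentioned above is the READ_ONCE() of the new stack in
prepare_switch_to() (arch/x86/include/asm/switch_to.h); roughly, and
simplified:

	/*
	 * Touch the new task's vmalloc'ed stack so that, if its top-level
	 * paging entry is missing in the current mm, we take an ordinary
	 * #PF now and vmalloc_fault() populates it, instead of
	 * double-faulting later once we're already running on that stack.
	 */
	READ_ONCE(*(unsigned char *)next->thread.sp);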
> 
> Fixes: b50858ce3e2a ("x86/mm/vmalloc: Add 5-level paging support")
> Cc: stable@...r.kernel.org
> Reported-and-tested-by: Neil Berrington <neil.berrington@...acore.com>
> Signed-off-by: Andy Lutomirski <luto@...nel.org>
> ---
>  arch/x86/mm/tlb.c | 34 +++++++++++++++++++++++++++++-----
>  1 file changed, 29 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index a1561957dccb..5bfe61a5e8e3 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -151,6 +151,34 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
>  	local_irq_restore(flags);
>  }
>  
> +static void sync_current_stack_to_mm(struct mm_struct *mm)
> +{
> +	unsigned long sp = current_stack_pointer;
> +	pgd_t *pgd = pgd_offset(mm, sp);
> +
> +	if (CONFIG_PGTABLE_LEVELS > 4) {

Can we have

	if (PTRS_PER_P4D > 1)

here instead? This way I wouldn't need to touch the code again for
boot-time switching support.
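
With boot-time switching, PTRS_PER_P4D stops being a compile-time
constant, so a CONFIG_PGTABLE_LEVELS check can't tell whether the running
kernel actually uses the extra level. A rough sketch of how the helper
might read with that check (illustrative only, assuming the rest of the
patch stays as posted):

static void sync_current_stack_to_mm(struct mm_struct *mm)
{
	unsigned long sp = current_stack_pointer;
	pgd_t *pgd = pgd_offset(mm, sp);

	if (PTRS_PER_P4D > 1) {
		/* Real 5-level: the top-level entry is a genuine pgd. */
		if (unlikely(pgd_none(*pgd)))
			set_pgd(pgd, *pgd_offset_k(sp));
	} else {
		/* Folded p4d: sync the p4d entry instead. */
		p4d_t *p4d = p4d_offset(pgd, sp);

		if (unlikely(p4d_none(*p4d)))
			set_p4d(p4d, *p4d_offset(pgd_offset_k(sp), sp));
	}
}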

-- 
 Kirill A. Shutemov
