Message-ID: <1544708660.18411.15.camel@lca.pw>
Date:   Thu, 13 Dec 2018 08:44:20 -0500
From:   Qian Cai <cai@....pw>
To:     James Morse <james.morse@....com>
Cc:     catalin.marinas@....com, will.deacon@....com, marc.zyngier@....com,
        takahiro.akashi@...aro.org, ard.biesheuvel@...aro.org,
        linux-arm-kernel@...ts.infradead.org, kexec@...ts.infradead.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] arm64: invalidate TLB before turning MMU on

On Thu, 2018-12-13 at 10:44 +0000, James Morse wrote:
> The kernel should already handle this, as we don't trust the bootloader to
> clean up either.
> 
> In arch/arm64/mm/proc.S::__cpu_setup()
> > /*
> > *	__cpu_setup
> > *
> > *	Initialise the processor for turning the MMU on.  Return in x0 the
> > *	value of the SCTLR_EL1 register.
> > */
> > 	.pushsection ".idmap.text", "awx"
> > ENTRY(__cpu_setup)
> > 	tlbi	vmalle1				// Invalidate local TLB
> > 	dsb	nsh
> 
> This is called from stext, which then branches to __primary_switch(), which
> calls __enable_mmu() where you see this problem. It shouldn't be possible to
> allocate new TLB entries between these points...
> 
> Do you have CONFIG_RANDOMIZE_BASE disabled? When enabled, it causes
> __enable_mmu() to be called twice; the extra TLB maintenance is in
> __primary_switch().
> (If it works with this turned off, that points to the extra off/tlbi/on
> sequence.)

Yes, CONFIG_RANDOMIZE_BASE is NOT set.
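
For reference, the extra off/tlbi/on sequence you mention is the
CONFIG_RANDOMIZE_BASE-only tail of head.S::__primary_switch(), roughly
(abridged from my reading of the v4.20 tree, so treat it as a sketch rather
than a verbatim quote):

	msr	sctlr_el1, x20			// disable the MMU
	isb
	bl	__create_page_tables		// recreate the kernel mapping

	tlbi	vmalle1				// remove any stale TLB entries
	dsb	nsh

	msr	sctlr_el1, x19			// re-enable the MMU
	isb
	ic	iallu				// discard instructions fetched
	dsb	nsh				// via the old mapping
	isb

Since CONFIG_RANDOMIZE_BASE is off here, none of that runs.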

> 
> > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> > index 4471f570a295..5196f3d729de 100644
> > --- a/arch/arm64/kernel/head.S
> > +++ b/arch/arm64/kernel/head.S
> > @@ -771,6 +771,10 @@ ENTRY(__enable_mmu)
> >  	msr	ttbr0_el1, x2			// load TTBR0
> >  	msr	ttbr1_el1, x1			// load TTBR1
> >  	isb
> > +	dsb	nshst
> > +	tlbi	vmalle1				// invalidate TLB
> > +	dsb	nsh
> > +	isb
> >  	msr	sctlr_el1, x0
> >  	isb
> 
> The overall change here is that we do extra maintenance later.
> 
> Can you move this around to bisect where the TLB entries are either coming
> from, or failing to be invalidated?
> Do your first and kdump kernels have the same VA_BITS/PAGE_SIZE?

Yes,

CONFIG_ARM64_VA_BITS=48
CONFIG_ARM64_PAGE_SHIFT=16
# CONFIG_ARM64_4K_PAGES is not set
# CONFIG_ARM64_16K_PAGES is not set
CONFIG_ARM64_64K_PAGES=y

> As a stab in the dark, (totally untested):
> ------------------------------%<------------------------------
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 2c75b0b903ae..a5f3b7314bda 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -406,9 +406,6 @@ ENDPROC(idmap_kpti_install_ng_mappings)
>   */
>         .pushsection ".idmap.text", "awx"
>  ENTRY(__cpu_setup)
> -       tlbi    vmalle1                         // Invalidate local TLB
> -       dsb     nsh
> -
>         mov     x0, #3 << 20
>         msr     cpacr_el1, x0                   // Enable FP/ASIMD
>         mov     x0, #1 << 12                    // Reset mdscr_el1 and disable
> @@ -465,5 +462,10 @@ ENTRY(__cpu_setup)
>  1:
>  #endif /* CONFIG_ARM64_HW_AFDBM */
>         msr     tcr_el1, x10
> +       isb
> +
> +       tlbi    vmalle1                         // Invalidate local TLB
> +       dsb     nsh
> +
>         ret                                     // return to head.S
>  ENDPROC(__cpu_setup)
> ------------------------------%<------------------------------
> 

This patch works well too.
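
So with your patch applied, the tail of __cpu_setup() ends up ordered roughly
like this, invalidating only after the translation controls are programmed:

	msr	tcr_el1, x10			// program translation controls first
	isb					// ensure the tcr_el1 write has taken effect
	tlbi	vmalle1				// then invalidate the local TLB
	dsb	nsh
	ret					// return to head.S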
