Message-ID: <295DB0D1-CDFB-482C-93DF-63DAA36DAE22@vmware.com>
Date: Sat, 21 Apr 2018 01:21:01 +0000
From: Nadav Amit <namit@...are.com>
To: Dave Hansen <dave.hansen@...ux.intel.com>
CC: LKML <linux-kernel@...r.kernel.org>,
	"open list:MEMORY MANAGEMENT" <linux-mm@...ck.org>,
	Fengguang Wu <fengguang.wu@...el.com>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Andy Lutomirski <luto@...nel.org>,
	Arjan van de Ven <arjan@...ux.intel.com>,
	Borislav Petkov <bp@...en8.de>,
	Dan Williams <dan.j.williams@...el.com>,
	David Woodhouse <dwmw2@...radead.org>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	"hughd@...gle.com" <hughd@...gle.com>,
	"jpoimboe@...hat.com" <jpoimboe@...hat.com>,
	"jgross@...e.com" <jgross@...e.com>,
	"keescook@...gle.com" <keescook@...gle.com>,
	"torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>,
	"peterz@...radead.org" <peterz@...radead.org>,
	"tglx@...utronix.de" <tglx@...utronix.de>,
	"mingo@...nel.org" <mingo@...nel.org>
Subject: Re: [PATCH 5/5] x86, pti: filter at vma->vm_page_prot population

Dave Hansen <dave.hansen@...ux.intel.com> wrote:
>
> From: Dave Hansen <dave.hansen@...ux.intel.com>
>
> 0day reported warnings at boot on 32-bit systems without NX support:
>
> [   12.349193] attempted to set unsupported pgprot: 8000000000000025 bits: 8000000000000000 supported: 7fffffffffffffff
> [   12.350792] WARNING: CPU: 0 PID: 1 at arch/x86/include/asm/pgtable.h:540 handle_mm_fault+0xfc1/0xfe0:
> check_pgprot at arch/x86/include/asm/pgtable.h:535
>  (inlined by) pfn_pte at arch/x86/include/asm/pgtable.h:549
>  (inlined by) do_anonymous_page at mm/memory.c:3169
>  (inlined by) handle_pte_fault at mm/memory.c:3961
>  (inlined by) __handle_mm_fault at mm/memory.c:4087
>  (inlined by) handle_mm_fault at mm/memory.c:4124
>
> The problem was that we stopped massaging page permissions at PTE creation
> time, so vma->vm_page_prot was passed unfiltered to PTE creation.
>
> To fix it, filter the page protections before they are installed in
> vma->vm_page_prot.
>
> Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
> Reported-by: Fengguang Wu <fengguang.wu@...el.com>
> Fixes: fb43d6cb91 ("x86/mm: Do not auto-massage page protections")
> Cc: Andrea Arcangeli <aarcange@...hat.com>
> Cc: Andy Lutomirski <luto@...nel.org>
> Cc: Arjan van de Ven <arjan@...ux.intel.com>
> Cc: Borislav Petkov <bp@...en8.de>
> Cc: Dan Williams <dan.j.williams@...el.com>
> Cc: David Woodhouse <dwmw2@...radead.org>
> Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
> Cc: Hugh Dickins <hughd@...gle.com>
> Cc: Josh Poimboeuf <jpoimboe@...hat.com>
> Cc: Juergen Gross <jgross@...e.com>
> Cc: Kees Cook <keescook@...gle.com>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Nadav Amit <namit@...are.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: linux-mm@...ck.org
> Cc: Ingo Molnar <mingo@...nel.org>
> ---
>
>  b/arch/x86/Kconfig               |    4 ++++
>  b/arch/x86/include/asm/pgtable.h |    5 +++++
>  b/mm/mmap.c                      |   11 ++++++++++-
>  3 files changed, 19 insertions(+), 1 deletion(-)
>
> diff -puN arch/x86/include/asm/pgtable.h~pti-glb-protection_map arch/x86/include/asm/pgtable.h
> --- a/arch/x86/include/asm/pgtable.h~pti-glb-protection_map	2018-04-20 14:10:08.251749151 -0700
> +++ b/arch/x86/include/asm/pgtable.h	2018-04-20 14:10:08.260749151 -0700
> @@ -601,6 +601,11 @@ static inline pgprot_t pgprot_modify(pgp
>
>  #define canon_pgprot(p) __pgprot(massage_pgprot(p))
>
> +static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
> +{
> +	return canon_pgprot(prot);
> +}
> +
>  static inline int is_new_memtype_allowed(u64 paddr, unsigned long size,
>  					 enum page_cache_mode pcm,
>  					 enum page_cache_mode new_pcm)
> diff -puN arch/x86/Kconfig~pti-glb-protection_map arch/x86/Kconfig
> --- a/arch/x86/Kconfig~pti-glb-protection_map	2018-04-20 14:10:08.253749151 -0700
> +++ b/arch/x86/Kconfig	2018-04-20 14:10:08.260749151 -0700
> @@ -52,6 +52,7 @@ config X86
>  	select ARCH_HAS_DEVMEM_IS_ALLOWED
>  	select ARCH_HAS_ELF_RANDOMIZE
>  	select ARCH_HAS_FAST_MULTIPLIER
> +	select ARCH_HAS_FILTER_PGPROT
>  	select ARCH_HAS_FORTIFY_SOURCE
>  	select ARCH_HAS_GCOV_PROFILE_ALL
>  	select ARCH_HAS_KCOV			if X86_64
> @@ -273,6 +274,9 @@ config ARCH_HAS_CPU_RELAX
>  config ARCH_HAS_CACHE_LINE_SIZE
>  	def_bool y
>
> +config ARCH_HAS_FILTER_PGPROT
> +	def_bool y
> +
>  config HAVE_SETUP_PER_CPU_AREA
>  	def_bool y
>
> diff -puN mm/mmap.c~pti-glb-protection_map mm/mmap.c
> --- a/mm/mmap.c~pti-glb-protection_map	2018-04-20 14:10:08.256749151 -0700
> +++ b/mm/mmap.c	2018-04-20 14:10:08.261749151 -0700
> @@ -100,11 +100,20 @@ pgprot_t protection_map[16] __ro_after_i
>  	__S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
>  };
>
> +#ifndef CONFIG_ARCH_HAS_FILTER_PGPROT
> +static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
> +{
> +	return prot;
> +}
> +#endif
> +
>  pgprot_t vm_get_page_prot(unsigned long vm_flags)
>  {
> -	return __pgprot(pgprot_val(protection_map[vm_flags &
> +	pgprot_t ret = __pgprot(pgprot_val(protection_map[vm_flags &
>  				(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]) |
>  			pgprot_val(arch_vm_get_page_prot(vm_flags)));
> +
> +	return arch_filter_pgprot(ret);
>  }
>  EXPORT_SYMBOL(vm_get_page_prot);

Wouldn't it be simpler, or at least cleaner, to change the protection map
if NX is not supported? I presume it can be done in paging_init(),
similarly to the way other archs (e.g., arm, mips) do it.