Date: Sat, 22 Aug 2020 21:16:52 -0400
From: Arvind Sankar <nivedita@...m.mit.edu>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Arvind Sankar <nivedita@...m.mit.edu>,
	Miguel Ojeda <miguel.ojeda.sandonis@...il.com>,
	Sedat Dilek <sedat.dilek@...il.com>,
	Segher Boessenkool <segher@...nel.crashing.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Nick Desaulniers <ndesaulniers@...gle.com>,
	"Paul E. McKenney" <paulmck@...nel.org>,
	Ingo Molnar <mingo@...hat.com>,
	Arnd Bergmann <arnd@...db.de>,
	Borislav Petkov <bp@...en8.de>,
	"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
	"H. Peter Anvin" <hpa@...or.com>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Zhenzhong Duan <zhenzhong.duan@...cle.com>,
	Kees Cook <keescook@...omium.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Juergen Gross <jgross@...e.com>,
	Andy Lutomirski <luto@...nel.org>,
	Andrew Cooper <andrew.cooper3@...rix.com>,
	LKML <linux-kernel@...r.kernel.org>,
	clang-built-linux <clang-built-linux@...glegroups.com>,
	Will Deacon <will@...nel.org>
Subject: Re: [PATCH] x86: work around clang IAS bug referencing __force_order

On Sat, Aug 22, 2020 at 05:10:21PM -0700, Linus Torvalds wrote:
> On Sat, Aug 22, 2020 at 4:11 PM Arvind Sankar <nivedita@...m.mit.edu> wrote:
> >
> > Actually, is a memory clobber required for correctness? Memory accesses
> > probably shouldn't be reordered across a CRn write. Is asm volatile
> > enough to stop that or do you need a memory clobber?
>
> You do need a memory clobber if you really care about ordering wrt
> normal memory references.
>
> That said, I'm not convinced we do care here. Normal memory accesses
> (as seen by the compiler) should be entirely immune to any changes we
> do wrt CRx registers.
>
> Because code that really fundamentally changes kernel mappings or
> access rules is already written in low-level assembler (eg the entry
> routines or bootup).
>
> Anything that relies on the more subtle changes (ie user space
> accesses etc) should already be ordered by other things - usually by
> the fact that they are also "asm volatile".
>
> But hey, maybe somebody can come up with an exception to that.
>
>               Linus

I'm sure in practice it can't happen, as any memory accesses happening
immediately around write_cr3() are probably mapped the same in both
pagetables anyway, but e.g. cleanup_trampoline() in
arch/x86/boot/compressed/pgtable_64.c does:

	memcpy(pgtable, trampoline_pgtable, PAGE_SIZE);
	native_write_cr3((unsigned long)pgtable);

There'd probably be trouble if the compiler were to reverse the order
here.

We could actually make write_crn() use a memory clobber, and read_crn()
use "m"(*(int *)0x1000) as an input operand. A bit hacky, but no global
variable needed. And maybe read_crn() doesn't even have to be volatile.

Also, if we look at the rdmsr/wrmsr pair, there's no force_order
equivalent AFAICT. wrmsr has a memory clobber, but the asm volatile-ness
is the only thing enforcing read/write ordering.
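
For readers following along: "asm volatile" by itself only prevents the
asm from being deleted or reordered against other volatile operations;
it is the "memory" clobber that tells the compiler the asm may read or
write any memory, so ordinary loads and stores cannot be moved across
it. A minimal sketch of the difference, assuming plain GNU C (an
illustration only, not kernel code; with_clobber() is essentially the
kernel's barrier() macro):

	#include <stdio.h>

	int flag;

	static inline void no_clobber(void)
	{
		/* volatile alone: ordinary memory accesses may still
		 * be moved across this asm, because it declares no
		 * memory effects. */
		asm volatile("" : :);
	}

	static inline void with_clobber(void)
	{
		/* "memory" clobber: the compiler must assume any
		 * memory may be read or written here, so prior stores
		 * are emitted before it and later loads are redone
		 * after it. */
		asm volatile("" : : : "memory");
	}

	int main(void)
	{
		flag = 1;
		with_clobber();		/* compiler barrier */
		printf("%d\n", flag);	/* flag reloaded from memory */
		return 0;
	}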
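
To make the write_crn()/read_crn() idea above concrete, here is a
sketch of how the accessors might look (one reading of the suggestion,
not an actual patch; 0x1000 is just an arbitrary dummy address that the
asm template never actually dereferences):

	static inline void native_write_cr3(unsigned long val)
	{
		/* "memory" clobber: no load or store can be reordered
		 * across the CR3 write. */
		asm volatile("mov %0,%%cr3" : : "r" (val) : "memory");
	}

	static inline unsigned long native_read_cr3(void)
	{
		unsigned long val;

		/* Dummy "m" input at a fixed address: the write side
		 * clobbers all memory, so a read carrying this fake
		 * dependency cannot be hoisted above a preceding
		 * write_crn(). Per the above, the volatile here might
		 * even be droppable. */
		asm volatile("mov %%cr3,%0" : "=r" (val)
			     : "m" (*(int *)0x1000));
		return val;
	}

The fixed-address operand plays the same role as the __force_order
global that the patch works around, which is what makes the "no global
variable needed" point work.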
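
And for the rdmsr/wrmsr comparison, a simplified sketch of the shape of
those accessors (exception handling and tracepoints stripped, and the
helper names here are hypothetical; see arch/x86/include/asm/msr.h for
the real ones):

	static inline unsigned long long rdmsr_sketch(unsigned int msr)
	{
		unsigned int low, high;

		/* No memory clobber and no force_order-style dummy
		 * operand: only the volatile qualifier keeps this from
		 * being reordered against other volatile asm such as
		 * the wrmsr below. */
		asm volatile("rdmsr" : "=a" (low), "=d" (high) : "c" (msr));
		return ((unsigned long long)high << 32) | low;
	}

	static inline void wrmsr_sketch(unsigned int msr, unsigned int low,
					unsigned int high)
	{
		/* The "memory" clobber orders the write against
		 * surrounding loads/stores, but does nothing for the
		 * clobber-free rdmsr above. */
		asm volatile("wrmsr" : : "c" (msr), "a" (low), "d" (high)
			     : "memory");
	}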