Message-ID: <55a35a3d8fba417aabe63ad13d519198@AcuMS.aculab.com>
Date:   Wed, 2 Sep 2020 15:58:38 +0000
From:   David Laight <David.Laight@...LAB.COM>
To:     'Arvind Sankar' <nivedita@...m.mit.edu>,
        Linus Torvalds <torvalds@...ux-foundation.org>
CC:     Miguel Ojeda <miguel.ojeda.sandonis@...il.com>,
        Sedat Dilek <sedat.dilek@...il.com>,
        Segher Boessenkool <segher@...nel.crashing.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Nick Desaulniers <ndesaulniers@...gle.com>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        "Ingo Molnar" <mingo@...hat.com>, Arnd Bergmann <arnd@...db.de>,
        Borislav Petkov <bp@...en8.de>,
        "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
        "H. Peter Anvin" <hpa@...or.com>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Kees Cook <keescook@...omium.org>,
        "Peter Zijlstra" <peterz@...radead.org>,
        Juergen Gross <jgross@...e.com>,
        "Andy Lutomirski" <luto@...nel.org>,
        Andrew Cooper <andrew.cooper3@...rix.com>,
        LKML <linux-kernel@...r.kernel.org>,
        clang-built-linux <clang-built-linux@...glegroups.com>,
        Will Deacon <will@...nel.org>,
        "nadav.amit@...il.com" <nadav.amit@...il.com>,
        Nathan Chancellor <natechancellor@...il.com>
Subject: RE: [PATCH v2] x86/asm: Replace __force_order with memory clobber

From: Arvind Sankar
> Sent: 02 September 2020 16:34
> 
> The CRn accessor functions use __force_order as a dummy operand to
> prevent the compiler from reordering the inline asm.
> 
> The fact that the asm is volatile should be enough to prevent this
> already, however older versions of GCC had a bug that could sometimes
> result in reordering. This was fixed in 8.1, 7.3 and 6.5. Versions prior
> to these, including 5.x and 4.9.x, may reorder volatile asm.
> 
> There are some issues with __force_order as implemented:
> - It is used only as an input operand for the write functions, and hence
>   doesn't do anything additional to prevent reordering writes.
> - It allows memory accesses to be cached/reordered across write
>   functions, but CRn writes affect the semantics of memory accesses, so
>   this could be dangerous.
> - __force_order is not actually defined in the kernel proper, but the
>   LLVM toolchain can in some cases require a definition: LLVM (as well
>   as GCC 4.9) requires it for PIE code, which is why the compressed
>   kernel has a definition. The clang integrated assembler may also
>   consider the address of __force_order to be significant, resulting
>   in a reference that requires a definition.
> 
> Fix this by:
> - Using a memory clobber for the write functions to additionally prevent
>   caching/reordering memory accesses across CRn writes.
> - Using a dummy input operand with an arbitrary constant address for the
>   read functions, instead of a global variable. This will prevent reads
>   from being reordered across writes, while allowing memory loads to be
>   cached/reordered across CRn reads, which should be safe.
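
(For reference, a simplified sketch of the two patterns described
above; the real accessors live in arch/x86/include/asm/special_insns.h,
and the _old/_new names here are purely illustrative:)

	/* Old pattern: __force_order as a dummy operand. It is only an
	 * input, even for writes, so it orders the asm statements against
	 * each other but not against surrounding memory accesses. */
	extern unsigned long __force_order;

	static inline unsigned long native_read_cr0_old(void)
	{
		unsigned long val;
		asm volatile("mov %%cr0, %0" : "=r" (val) : "m" (__force_order));
		return val;
	}

	static inline void native_write_cr0_old(unsigned long val)
	{
		asm volatile("mov %0, %%cr0" : : "r" (val), "m" (__force_order));
	}

	/* New pattern: a dummy input at an arbitrary constant address for
	 * reads, and a full "memory" clobber for writes. The 0x1000 value
	 * is just one example of an arbitrary constant address. */
	#define __FORCE_ORDER "m" (*(unsigned int *)0x1000UL)

	static inline unsigned long native_read_cr0_new(void)
	{
		unsigned long val;
		asm volatile("mov %%cr0, %0" : "=r" (val) : __FORCE_ORDER);
		return val;
	}

	static inline void native_write_cr0_new(unsigned long val)
	{
		asm volatile("mov %0, %%cr0" : : "r" (val) : "memory");
	}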

How much does using a full memory clobber for the reads cost?

It would remove any chance of the compiler deciding it needs to
load the address of the 'dummy' location into a register so that
it can be used as a memory reference in a generated instruction
(which is probably what was happening for the PIE compiles).
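
For comparison, a read with a full "memory" clobber, which is a
hypothetical variant for the question above rather than anything in
the patch, would look like:

	static inline unsigned long native_read_cr0_clobber(void)
	{
		unsigned long val;

		/* The "memory" clobber makes the compiler discard any values
		 * it has cached from memory across the read; that lost
		 * caching is the cost in question. It also leaves no dummy
		 * operand whose address must be materialized, which is the
		 * likely PIE issue. */
		asm volatile("mov %%cr0, %0" : "=r" (val) : : "memory");
		return val;
	}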

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
