Message-ID: <CAKwvOdnOh3H3ga2qpTktywvcgfXW5QJaB7r4XMhigmDzLhDNeA@mail.gmail.com>
Date: Wed, 12 Aug 2020 17:12:34 -0700
From: Nick Desaulniers <ndesaulniers@...gle.com>
To: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>
Cc: Arnd Bergmann <arnd@...db.de>, Borislav Petkov <bp@...en8.de>,
"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Zhenzhong Duan <zhenzhong.duan@...cle.com>,
Kees Cook <keescook@...omium.org>,
Peter Zijlstra <peterz@...radead.org>,
Juergen Gross <jgross@...e.com>,
Andy Lutomirski <luto@...nel.org>,
Andrew Cooper <andrew.cooper3@...rix.com>,
LKML <linux-kernel@...r.kernel.org>,
clang-built-linux <clang-built-linux@...glegroups.com>
Subject: Re: [PATCH] x86: work around clang IAS bug referencing __force_order
On Thu, Aug 6, 2020 at 3:11 PM Thomas Gleixner <tglx@...utronix.de> wrote:
>
> Arnd Bergmann <arnd@...db.de> writes:
> > When using the clang integrated assembler, we get a reference
> > to __force_order that should normally get ignored in a few
> > rare cases:
> >
> > ERROR: modpost: "__force_order" [drivers/cpufreq/powernow-k6.ko] undefined!
> >
> > Add a 'static' definition so any file in which this happens can
> > have a local copy.
>
> That's a horrible hack.
Agreed. And `static` means every file that defines it gets its own
copy, rather than sharing one memory address. I guess no one actually
writes to it, so it doesn't really matter, but __force_order just
seems so strange to me.
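Concretely, a hypothetical two-file sketch (not actual kernel files)
of what the 'static' approach means:

/* a.c (hypothetical) */
static unsigned long __force_order;	/* a.o carries its own copy */

/* b.c (hypothetical) */
static unsigned long __force_order;	/* b.o carries a second, distinct copy */

Nothing ever stores to either copy, so nothing breaks, but the name no
longer refers to a single shared address.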
> And the only reason why it does not trigger -Wunused-variable warnings
> all over the place is because it's "referenced" in unused inline
> functions and then optimized out along with the unused inlines.
>
> > * It is not referenced from the code, but GCC < 5 with -fPIE would fail
> > * due to an undefined symbol. Define it to make these ancient GCCs
> > work.
>
> Bah, we really should have moved straight to GCC5 instead of upping it
> just to 4.9
>
> > + *
> > + * Clang sometimes fails to kill the reference to the dummy variable, so
> > + * provide an actual copy.
>
> Can that compiler be fixed instead?
I don't think so. The logic in the compiler for whether to emit an
"address is significant" assembler directive is based on whether the
variable is "used." The "use" of `__force_order` is as an output of
all of these control register read/write functions' inline asm, even
though the inline asm never actually writes to it. We'd have to peek
inside the inline asm and build use/def chains for it in order to see
that the output variable is never really written. The best we can do
is see it listed as an output of the inline asm statement. And if you
reference an `extern` variable, it should be no surprise that you can
end up with undefined symbol linkage failures.
I'd much rather remove all of __force_order.
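To spell out what the compiler sees, here's the existing pattern from
special_insns.h (simplified sketch, same code as in the diff below):

extern unsigned long __force_order;

static inline unsigned long native_read_cr0(void)
{
	unsigned long val;

	/*
	 * __force_order is listed as an "=m" output, so the compiler
	 * has to assume the asm may store to it and therefore treats
	 * the extern as used; it can't look inside the template to see
	 * that no instruction ever touches that operand.
	 */
	asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order));
	return val;
}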
>
> Aside of that is there a reason to make this 'static' thing wrapped in
> #ifdeffery? A quick check with GCC8.3 just works. But maybe 4.9 gets
> unhappy. Can't say due to: -ENOANCIENTCOMPILER :)
From the comment in arch/x86/boot/compressed/pgtable_64.c, there's a
hint that maybe gcc < 5 and -pie (CONFIG_RANDOMIZE_BASE?) would fail
due to an undefined symbol, though I'm not sure which symbol the
comment is referring to. If it's __force_order, then removing it
outright likely fixes that issue.
Not sure about the comment in arch/x86/include/asm/special_insns.h
either; it smells like a workaround for a bug in a compiler from a
long time ago.
It looks like it was introduced in:
commit d3ca901f94b32 ("x86: unify paravirt parts of system.h")
Lore has this thread:
https://lore.kernel.org/lkml/4755A809.4050305@qumranet.com/
Patch 4: https://lore.kernel.org/lkml/11967844071346-git-send-email-gcosta@redhat.com/
It seems like there was a discussion about %cr8, but no one asked
"what's going on here with __force_order, is that right?"
The latest GCC release as of December 4, 2007 would have been GCC
4.2.2, according to:
https://gcc.gnu.org/releases.html
A quick boot test of the patch below works for me, though I should
probably also test hosting a virtualized guest, since d3ca901f94b32
refers to paravirt. Thoughts?
--- a/arch/x86/boot/compressed/pgtable_64.c
+++ b/arch/x86/boot/compressed/pgtable_64.c
@@ -5,15 +5,6 @@
#include "pgtable.h"
#include "../string.h"
-/*
- * __force_order is used by special_insns.h asm code to force instruction
- * serialization.
- *
- * It is not referenced from the code, but GCC < 5 with -fPIE would fail
- * due to an undefined symbol. Define it to make these ancient GCCs work.
- */
-unsigned long __force_order;
-
#define BIOS_START_MIN 0x20000U /* 128K, less than this is insane */
#define BIOS_START_MAX 0x9f000U /* 640K, absolute maximum */
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 59a3e13204c3..d2e0d53b0f69 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -10,46 +10,37 @@
#include <linux/irqflags.h>
#include <linux/jump_label.h>
-/*
- * Volatile isn't enough to prevent the compiler from reordering the
- * read/write functions for the control registers and messing everything up.
- * A memory clobber would solve the problem, but would prevent reordering of
- * all loads stores around it, which can hurt performance. Solution is to
- * use a variable and mimic reads and writes to it to enforce serialization
- */
-extern unsigned long __force_order;
-
void native_write_cr0(unsigned long val);
static inline unsigned long native_read_cr0(void)
{
unsigned long val;
- asm volatile("mov %%cr0,%0\n\t" : "=r" (val), "=m" (__force_order));
+ asm volatile("mov %%cr0,%0\n\t" : "=r" (val));
return val;
}
static __always_inline unsigned long native_read_cr2(void)
{
unsigned long val;
- asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order));
+ asm volatile("mov %%cr2,%0\n\t" : "=r" (val));
return val;
}
static __always_inline void native_write_cr2(unsigned long val)
{
- asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order));
+ asm volatile("mov %0,%%cr2": : "r" (val));
}
static inline unsigned long __native_read_cr3(void)
{
unsigned long val;
- asm volatile("mov %%cr3,%0\n\t" : "=r" (val), "=m" (__force_order));
+ asm volatile("mov %%cr3,%0\n\t" : "=r" (val));
return val;
}
static inline void native_write_cr3(unsigned long val)
{
- asm volatile("mov %0,%%cr3": : "r" (val), "m" (__force_order));
+ asm volatile("mov %0,%%cr3": : "r" (val));
}
static inline unsigned long native_read_cr4(void)
@@ -64,10 +55,10 @@ static inline unsigned long native_read_cr4(void)
asm volatile("1: mov %%cr4, %0\n"
"2:\n"
_ASM_EXTABLE(1b, 2b)
- : "=r" (val), "=m" (__force_order) : "0" (0));
+ : "=r" (val) : "0" (0));
#else
/* CR4 always exists on x86_64. */
- asm volatile("mov %%cr4,%0\n\t" : "=r" (val), "=m" (__force_order));
+ asm volatile("mov %%cr4,%0\n\t" : "=r" (val));
#endif
return val;
}
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 965474d78cef..ba9b5234cf44 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -358,7 +358,7 @@ void native_write_cr0(unsigned long val)
unsigned long bits_missing = 0;
set_register:
- asm volatile("mov %0,%%cr0": "+r" (val), "+m" (__force_order));
+ asm volatile("mov %0,%%cr0": "+r" (val));
if (static_branch_likely(&cr_pinning)) {
if (unlikely((val & X86_CR0_WP) != X86_CR0_WP)) {
--
Thanks,
~Nick Desaulniers