Message-ID: <CAKwvOdmUfAg9cP4tHV7tXC8PtcumehZ99+wqdcmkTR5a6LORrw@mail.gmail.com>
Date: Tue, 16 Jul 2019 11:15:54 -0700
From: Nick Desaulniers <ndesaulniers@...gle.com>
To: Josh Poimboeuf <jpoimboe@...hat.com>,
Miguel Ojeda <miguel.ojeda.sandonis@...il.com>
Cc: "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Arnd Bergmann <arnd@...db.de>, Jann Horn <jannh@...gle.com>,
Randy Dunlap <rdunlap@...radead.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>
Subject: Re: [PATCH 10/22] bpf: Disable GCC -fgcse optimization for ___bpf_prog_run()
On Sun, Jul 14, 2019 at 5:37 PM Josh Poimboeuf <jpoimboe@...hat.com> wrote:
>
> On x86-64, with CONFIG_RETPOLINE=n, GCC's "global common subexpression
> elimination" optimization results in ___bpf_prog_run()'s jumptable code
> changing from this:
>
> select_insn:
>         jmp *jumptable(, %rax, 8)
>         ...
> ALU64_ADD_X:
>         ...
>         jmp *jumptable(, %rax, 8)
> ALU_ADD_X:
>         ...
>         jmp *jumptable(, %rax, 8)
>
> to this:
>
> select_insn:
>         mov jumptable, %r12
>         jmp *(%r12, %rax, 8)
>         ...
> ALU64_ADD_X:
>         ...
>         jmp *(%r12, %rax, 8)
> ALU_ADD_X:
>         ...
>         jmp *(%r12, %rax, 8)
>
> The jumptable address is placed in a register once, at the beginning of
> the function. The function execution can then go through multiple
> indirect jumps which rely on that same register value. This has a few
> issues:
>
> 1) Objtool isn't smart enough to be able to track such a register value
> across multiple recursive indirect jumps through the jump table.
>
> 2) With CONFIG_RETPOLINE enabled, this optimization actually results in
> a small slowdown. I measured a ~4.7% slowdown in the test_bpf
> "tcpdump port 22" selftest.
>
> This slowdown is actually predicted by the GCC manual:
>
> Note: When compiling a program using computed gotos, a GCC
> extension, you may get better run-time performance if you
> disable the global common subexpression elimination pass by
> adding -fno-gcse to the command line.
>
> So just disable the optimization for this function.
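
For anyone following along, here's a rough sketch of the computed-goto
dispatch pattern the GCC manual note is talking about (labels and opcodes
are made up for illustration, not the actual BPF interpreter):

static int run(const unsigned char *prog)
{
        /* Jump table of label addresses (&&label is the GCC extension). */
        static const void *jumptable[] = { &&op_halt, &&op_inc, &&op_dec };
        int acc = 0;

        goto *jumptable[*prog];         /* indirect jump through the table */

op_inc:
        acc++;
        prog++;
        goto *jumptable[*prog];         /* GCSE may hoist the table address into a register */
op_dec:
        acc--;
        prog++;
        goto *jumptable[*prog];
op_halt:
        return acc;
}

With -fgcse, GCC is free to compute the jumptable address once and reuse
that register across all of the indirect jumps, which is exactly the
transformation shown in the asm above.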
>
> Fixes: e55a73251da3 ("bpf: Fix ORC unwinding in non-JIT BPF code")
> Reported-by: Randy Dunlap <rdunlap@...radead.org>
> Signed-off-by: Josh Poimboeuf <jpoimboe@...hat.com>
> Acked-by: Alexei Starovoitov <ast@...nel.org>
> ---
> Cc: Alexei Starovoitov <ast@...nel.org>
> Cc: Daniel Borkmann <daniel@...earbox.net>
> ---
> include/linux/compiler-gcc.h | 2 ++
> include/linux/compiler_types.h | 4 ++++
> kernel/bpf/core.c | 2 +-
> 3 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
> index e8579412ad21..d7ee4c6bad48 100644
> --- a/include/linux/compiler-gcc.h
> +++ b/include/linux/compiler-gcc.h
> @@ -170,3 +170,5 @@
> #else
> #define __diag_GCC_8(s)
> #endif
> +
> +#define __no_fgcse __attribute__((optimize("-fno-gcse")))
+ Miguel, maintainer of compiler_attributes.h
I wonder if the optimize attribute can be feature-detected?
Is -fno-gcse supported all the way back to GCC 4.6?
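If so, something along these lines (completely untested sketch; assuming
__has_attribute() handles "optimize" the same way it does the attributes
already covered in compiler_attributes.h) might avoid tying this to a
particular GCC version:

/* Untested sketch: feature-detect the attribute instead of defining it
 * unconditionally for GCC. */
#if __has_attribute(optimize)
# define __no_fgcse __attribute__((optimize("-fno-gcse")))
#else
# define __no_fgcse
#endif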
> diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
> index 095d55c3834d..599c27b56c29 100644
> --- a/include/linux/compiler_types.h
> +++ b/include/linux/compiler_types.h
> @@ -189,6 +189,10 @@ struct ftrace_likely_data {
> #define asm_volatile_goto(x...) asm goto(x)
> #endif
>
> +#ifndef __no_fgcse
> +# define __no_fgcse
> +#endif
> +
> /* Are two types/vars the same type (ignoring qualifiers)? */
> #define __same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b))
>
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 7e98f36a14e2..8191a7db2777 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -1295,7 +1295,7 @@ bool bpf_opcode_in_insntable(u8 code)
> *
> * Decode and execute eBPF instructions.
> */
> -static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
> +static u64 __no_fgcse ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
> {
> #define BPF_INSN_2_LBL(x, y) [BPF_##x | BPF_##y] = &&x##_##y
> #define BPF_INSN_3_LBL(x, y, z) [BPF_##x | BPF_##y | BPF_##z] = &&x##_##y##_##z
> --
> 2.20.1
>
--
Thanks,
~Nick Desaulniers