Message-ID: <fd14097e-31c5-0c0e-cbe5-d399299ca489@fb.com>
Date: Fri, 20 Apr 2018 11:06:03 -0700
From: Yonghong Song <yhs@...com>
To: Peter Zijlstra <peterz@...radead.org>
CC: <mingo@...nel.org>, <ast@...com>, <daniel@...earbox.net>,
<linux-kernel@...r.kernel.org>, <x86@...nel.org>,
<kernel-team@...com>
Subject: Re: [PATCH v2] x86/cpufeature: guard asm_volatile_goto usage with
NO_BPF_WORKAROUND
On 4/20/18 1:19 AM, Peter Zijlstra wrote:
> On Sat, Apr 14, 2018 at 09:27:38PM -0700, Yonghong Song wrote:
>
>> This patch adds a preprocessor guard NO_BPF_WORKAROUND around the
>> asm_volatile_goto based static_cpu_has(). NO_BPF_WORKAROUND is set
>> at toplevel Makefile when compiler supports asm-goto. That is,
>> if the compiler supports asm-goto, the kernel build will use
>> asm-goto version of static_cpu_has().
>
> Hurm, so adding __BPF__ for BPF compiles isn't an option? It seems to me
> having a CPP flag to identify BPF compile context might be useful in
> general.
With "clang -target bpf", we already have __BPF__ defined.
For tracing, especially when ptrace.h is included, "clang -target
<native_arch>" (where "-target <native_arch>" can be omitted) is
typically used. The reason is that the native architecture header
files typically include a lot of asm-related constructs that
"-target bpf" cannot really handle. We rely on the native clang
front end to flush out all these asm constructs so that only the
stuff needed by the bpf program survives and reaches the backend
compiler.
The backend compiler, llc, is then invoked with "-march=bpf" to do
the right thing and generate bpf byte codes.
So the patch is really a workaround for "clang -target x86_64" with
the intention of using "llc -march=bpf" later on.
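For reference, the typical two-step pipeline (file names here are
just illustrative, roughly what samples/bpf does) looks like:

  clang -O2 -emit-llvm -c trace_prog.c -o - | \
      llc -march=bpf -filetype=obj -o trace_prog.o

i.e. the native-target clang front end compiles to LLVM IR, and
llc with "-march=bpf" emits the final bpf object.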