Message-ID: <CAADnVQL2huFSNAn4Pkbx2GOqAB=Z-rtd+Fp3BnJTZ-tbzOhgmw@mail.gmail.com>
Date: Thu, 29 Jan 2026 09:21:30 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Changwoo Min <changwoo@...lia.com>
Cc: Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>, Martin KaFai Lau <martin.lau@...ux.dev>,
Eduard Zingerman <eddyz87@...il.com>, Song Liu <song@...nel.org>,
Yonghong Song <yonghong.song@...ux.dev>, John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...ichev.me>, Hao Luo <haoluo@...gle.com>,
Jiri Olsa <jolsa@...nel.org>, Shuah Khan <shuah@...nel.org>, kernel-dev@...lia.com,
bpf <bpf@...r.kernel.org>, sched-ext@...ts.linux.dev,
LKML <linux-kernel@...r.kernel.org>,
"open list:KERNEL SELFTEST FRAMEWORK" <linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH] selftests/bpf: Make x86 preempt_count access compatible
across v6.14+
On Thu, Jan 29, 2026 at 5:54 AM Changwoo Min <changwoo@...lia.com> wrote:
>
> Recent x86 kernels (v6.15+) export __preempt_count as a ksym, while older
> kernels expose the preemption counter via pcpu_hot.preempt_count. The
> existing selftest helper unconditionally dereferenced __preempt_count,
> which breaks BPF program loading on older kernels.
>
> Make the x86 preemption count lookup version-agnostic by:
> - Marking __preempt_count and pcpu_hot as weak ksyms.
> - Introducing a BTF-described pcpu_hot___local layout with
> preserve_access_index.
> - Selecting the appropriate access path at runtime using ksym availability
> and bpf_core_field_exists().
>
> This allows a single BPF binary to run correctly on both v6.14-and-older
> and v6.15-and-newer kernels without relying on compile-time version checks.
See... with the bpf approach instead of a kfunc, these new helpers
can work on old kernels without backporting kfuncs :)
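For context, this works because a __weak __ksym that the running
kernel doesn't export resolves to a NULL address, and "___local" is a
libbpf CO-RE type flavor: everything from the triple underscore on is
ignored when matching the local type against kernel BTF. A hypothetical
standalone sketch, with made-up names:

extern const int foo __ksym __weak;	/* &foo is NULL if the kernel lacks it */

struct bar___local {
	int field;	/* relocated against the kernel's struct bar */
} __attribute__((preserve_access_index));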
> Fixes: 4b69e31329b6 ("selftests/bpf: Introduce experimental bpf_in_interrupt()")
The Fixes tag is not appropriate here; this is not a bug fix.
> Signed-off-by: Changwoo Min <changwoo@...lia.com>
> ---
> tools/testing/selftests/bpf/bpf_experimental.h | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
> index a39576c8ba04..0194c0090e50 100644
> --- a/tools/testing/selftests/bpf/bpf_experimental.h
> +++ b/tools/testing/selftests/bpf/bpf_experimental.h
> @@ -614,7 +614,13 @@ extern int bpf_cgroup_read_xattr(struct cgroup *cgroup, const char *name__str,
>
> extern bool CONFIG_PREEMPT_RT __kconfig __weak;
> #ifdef bpf_target_x86
> -extern const int __preempt_count __ksym;
> +extern const int __preempt_count __ksym __weak;
> +
> +struct pcpu_hot___local {
> + int preempt_count;
> +} __attribute__((preserve_access_index));
> +
> +extern struct pcpu_hot___local pcpu_hot __ksym __weak;
> #endif
>
> struct task_struct___preempt_rt {
> @@ -624,7 +630,13 @@ struct task_struct___preempt_rt {
> static inline int get_preempt_count(void)
> {
> #if defined(bpf_target_x86)
> - return *(int *) bpf_this_cpu_ptr(&__preempt_count);
> + /* v6.15 or later */
> + if (&__preempt_count)
> + return *(int *) bpf_this_cpu_ptr(&__preempt_count);
Please use bpf_ksym_exists(). It helps to catch a missing __weak.
This patch does add __weak, but let's demonstrate best coding practices.
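Roughly like this (a sketch; bpf_ksym_exists() is defined in
tools/lib/bpf/bpf_helpers.h, and its _Static_assert is how a missing
__weak gets caught at compile time):

	/* v6.15 or later */
	if (bpf_ksym_exists(&__preempt_count))
		return *(int *) bpf_this_cpu_ptr(&__preempt_count);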
> + /* v6.14 or older */
> + if (bpf_core_field_exists(pcpu_hot.preempt_count))
> + return ((struct pcpu_hot___local *)
> + bpf_this_cpu_ptr(&pcpu_hot))->preempt_count;
IIRC the pcpu_hot approach was only around for a short time;
5.x kernels, for example, didn't have it, and __preempt_count was
a per-cpu var there too. Please adjust the comment.
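Maybe something like this (the version range is from memory and worth
double-checking against git history):

	/* __preempt_count was folded into pcpu_hot only for a few
	 * releases (roughly v6.2 .. v6.14); before and after that
	 * it is a standalone per-cpu variable.
	 */
	if (bpf_core_field_exists(pcpu_hot.preempt_count))
		return ((struct pcpu_hot___local *)
			bpf_this_cpu_ptr(&pcpu_hot))->preempt_count;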
pw-bot: cr