Message-ID: <20260129135426.93424-1-changwoo@igalia.com>
Date: Thu, 29 Jan 2026 22:54:26 +0900
From: Changwoo Min <changwoo@...lia.com>
To: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>
Cc: Martin KaFai Lau <martin.lau@...ux.dev>,
Eduard Zingerman <eddyz87@...il.com>,
Song Liu <song@...nel.org>,
Yonghong Song <yonghong.song@...ux.dev>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...ichev.me>,
Hao Luo <haoluo@...gle.com>,
Jiri Olsa <jolsa@...nel.org>,
Shuah Khan <shuah@...nel.org>,
kernel-dev@...lia.com,
bpf@...r.kernel.org,
sched-ext@...ts.linux.dev,
linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org,
Changwoo Min <changwoo@...lia.com>
Subject: [PATCH] selftests/bpf: Make x86 preempt_count access compatible across v6.14+

Recent x86 kernels (v6.15+) export __preempt_count as a ksym, while
older kernels expose the preemption counter via pcpu_hot.preempt_count.
The existing selftest helper unconditionally dereferences
__preempt_count, which breaks BPF program loading on older kernels.

Make the x86 preemption count lookup version-agnostic by:

- Marking __preempt_count and pcpu_hot as weak ksyms.
- Introducing a BTF-described pcpu_hot___local layout with
  preserve_access_index.
- Selecting the appropriate access path at runtime using ksym
  availability and bpf_core_field_exists().

This allows a single BPF binary to run correctly on both
v6.14-and-older and v6.15-and-newer kernels without relying on
compile-time version checks.
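
For illustration only (not part of the patch), a minimal consumer of
the helper could look like the sketch below. It assumes
bpf_experimental.h pulls in vmlinux.h and the libbpf helper headers;
the fentry attach point and variable names are hypothetical:

  /* sketch: read the preemption count from a BPF program */
  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>
  #include "bpf_experimental.h"

  char _license[] SEC("license") = "GPL";

  int last_preempt_count; /* read back from user space */

  SEC("fentry/update_rq_clock") /* hypothetical attach point */
  int BPF_PROG(probe_preempt_count)
  {
          /* resolves via __preempt_count on v6.15+, or via
           * pcpu_hot.preempt_count on v6.14 and older
           */
          last_preempt_count = get_preempt_count();
          return 0;
  }

Since both ksyms are declared weak, libbpf accepts the object even
when one of the symbols is absent, and the verifier prunes the branch
whose ksym address resolves to zero.
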
Fixes: 4b69e31329b6 ("selftests/bpf: Introduce experimental bpf_in_interrupt()")
Signed-off-by: Changwoo Min <changwoo@...lia.com>
---
tools/testing/selftests/bpf/bpf_experimental.h | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index a39576c8ba04..0194c0090e50 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -614,7 +614,13 @@ extern int bpf_cgroup_read_xattr(struct cgroup *cgroup, const char *name__str,
 extern bool CONFIG_PREEMPT_RT __kconfig __weak;
 
 #ifdef bpf_target_x86
-extern const int __preempt_count __ksym;
+extern const int __preempt_count __ksym __weak;
+
+struct pcpu_hot___local {
+	int preempt_count;
+} __attribute__((preserve_access_index));
+
+extern struct pcpu_hot___local pcpu_hot __ksym __weak;
 #endif
 
 struct task_struct___preempt_rt {
@@ -624,7 +630,13 @@ struct task_struct___preempt_rt {
 static inline int get_preempt_count(void)
 {
 #if defined(bpf_target_x86)
-	return *(int *) bpf_this_cpu_ptr(&__preempt_count);
+	/* v6.15 or later */
+	if (&__preempt_count)
+		return *(int *) bpf_this_cpu_ptr(&__preempt_count);
+	/* v6.14 or older */
+	if (bpf_core_field_exists(pcpu_hot.preempt_count))
+		return ((struct pcpu_hot___local *)
+			bpf_this_cpu_ptr(&pcpu_hot))->preempt_count;
 #elif defined(bpf_target_arm64)
 	return bpf_get_current_task_btf()->thread_info.preempt.count;
 #endif
--
2.52.0