Message-ID: <6c9852055cae7d54ce33df77c5c7a1dc@kernel.org>
Date: Fri, 24 Oct 2025 13:33:50 -1000
From: Tejun Heo <tj@...nel.org>
To: David Vernet <void@...ifault.com>, Andrea Righi <arighi@...dia.com>, Changwoo Min <changwoo@...lia.com>
Cc: sched-ext@...ts.linux.dev, linux-kernel@...r.kernel.org, bpf@...r.kernel.org, Andrii Nakryiko <andrii@...nel.org>
Subject: [PATCH sched_ext/for-6.19] sched_ext: Add ___compat suffix to scx_bpf_dsq_insert___v2 in compat.bpf.h

2dbbdeda77a6 ("sched_ext: Fix scx_bpf_dsq_insert() backward binary
compatibility") renamed the new bool-returning variant to
scx_bpf_dsq_insert___v2 in the kernel. However, libbpf currently strips the
___SUFFIX only on the BPF side, not on kernel symbols, so the compat wrapper
couldn't match the kernel kfunc and would always fall back to the old variant
even when the new one was available.

Add an extra ___compat suffix as a workaround - libbpf strips one suffix on
the BPF side leaving ___v2, which then matches the kernel kfunc directly. In
the future, when libbpf strips all suffixes on both sides, all the suffixes
can be dropped.
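
For reference, the resulting weak-ksym fallback pattern in isolation looks
like the sketch below. The kfunc names are hypothetical and this is not part
of the change itself; the real declarations are in the diff that follows. It
assumes the usual BPF includes (vmlinux.h and bpf/bpf_helpers.h, which
provides bpf_ksym_exists(), __ksym and __weak):

  /*
   * Minimal sketch of the compat pattern, with hypothetical kfunc names.
   * The ___compat suffix is stripped on the BPF side, so the declaration
   * resolves against the kernel's ___v2 kfunc when it exists.
   */
  bool my_kfunc___v2___compat(struct task_struct *p, u64 flags) __ksym __weak;
  void my_kfunc___v1(struct task_struct *p, u64 flags) __ksym __weak;

  static inline bool my_kfunc(struct task_struct *p, u64 flags)
  {
          if (bpf_ksym_exists(my_kfunc___v2___compat))
                  return my_kfunc___v2___compat(p, flags);

          /* old kernels only have the void-returning variant */
          my_kfunc___v1(p, flags);
          return true;
  }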
Fixes: 2dbbdeda77a6 ("sched_ext: Fix scx_bpf_dsq_insert() backward binary compatibility")
Cc: Andrii Nakryiko <andrii@...nel.org>
Signed-off-by: Tejun Heo <tj@...nel.org>
---
tools/sched_ext/include/scx/compat.bpf.h | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
--- a/tools/sched_ext/include/scx/compat.bpf.h
+++ b/tools/sched_ext/include/scx/compat.bpf.h
@@ -237,15 +237,17 @@ scx_bpf_dsq_insert_vtime(struct task_struct *p, u64 dsq_id, u64 slice, u64 vtime
/*
* v6.19: scx_bpf_dsq_insert() now returns bool instead of void. Move
* scx_bpf_dsq_insert() decl to common.bpf.h and drop compat helper after v6.22.
+ * The extra ___compat suffix is to work around libbpf not ignoring ___SUFFIX on
+ * kernel side. The entire suffix can be dropped later.
*/
-bool scx_bpf_dsq_insert___v2(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym __weak;
+bool scx_bpf_dsq_insert___v2___compat(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym __weak;
void scx_bpf_dsq_insert___v1(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym __weak;
static inline bool
scx_bpf_dsq_insert(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags)
{
- if (bpf_ksym_exists(scx_bpf_dsq_insert___v2)) {
- return scx_bpf_dsq_insert___v2(p, dsq_id, slice, enq_flags);
+ if (bpf_ksym_exists(scx_bpf_dsq_insert___v2___compat)) {
+ return scx_bpf_dsq_insert___v2___compat(p, dsq_id, slice, enq_flags);
} else {
scx_bpf_dsq_insert___v1(p, dsq_id, slice, enq_flags);
return true;