Message-ID: <Z8DKSgzZB5HZgYN8@slm.duckdns.org>
Date: Thu, 27 Feb 2025 10:25:46 -1000
From: Tejun Heo <tj@...nel.org>
To: Juntong Deng <juntong.deng@...look.com>
Cc: ast@...nel.org, daniel@...earbox.net, john.fastabend@...il.com,
andrii@...nel.org, martin.lau@...ux.dev, eddyz87@...il.com,
song@...nel.org, yonghong.song@...ux.dev, kpsingh@...nel.org,
sdf@...ichev.me, haoluo@...gle.com, jolsa@...nel.org,
memxor@...il.com, void@...ifault.com, arighi@...dia.com,
changwoo@...lia.com, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH sched_ext/for-6.15 v3 3/5] sched_ext: Add scx_kfunc_ids_ops_context_sensitive for unified filtering of context-sensitive SCX kfuncs

Hello,
On Wed, Feb 26, 2025 at 07:28:18PM +0000, Juntong Deng wrote:
> +BTF_KFUNCS_START(scx_kfunc_ids_ops_context_sensitive)
> +/* scx_kfunc_ids_select_cpu */
> +BTF_ID_FLAGS(func, scx_bpf_select_cpu_dfl, KF_RCU)
> +/* scx_kfunc_ids_enqueue_dispatch */
> +BTF_ID_FLAGS(func, scx_bpf_dsq_insert, KF_RCU)
> +BTF_ID_FLAGS(func, scx_bpf_dsq_insert_vtime, KF_RCU)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch, KF_RCU)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_vtime, KF_RCU)
> +/* scx_kfunc_ids_dispatch */
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_nr_slots)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_cancel)
> +BTF_ID_FLAGS(func, scx_bpf_dsq_move_to_local)
> +BTF_ID_FLAGS(func, scx_bpf_consume)
> +/* scx_kfunc_ids_cpu_release */
> +BTF_ID_FLAGS(func, scx_bpf_reenqueue_local)
> +/* scx_kfunc_ids_unlocked */
> +BTF_ID_FLAGS(func, scx_bpf_create_dsq, KF_SLEEPABLE)
> +/* Intersection of scx_kfunc_ids_dispatch and scx_kfunc_ids_unlocked */
> +BTF_ID_FLAGS(func, scx_bpf_dsq_move_set_slice)
> +BTF_ID_FLAGS(func, scx_bpf_dsq_move_set_vtime)
> +BTF_ID_FLAGS(func, scx_bpf_dsq_move, KF_RCU)
> +BTF_ID_FLAGS(func, scx_bpf_dsq_move_vtime, KF_RCU)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq_set_slice)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq_set_vtime)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq, KF_RCU)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_vtime_from_dsq, KF_RCU)
> +BTF_KFUNCS_END(scx_kfunc_ids_ops_context_sensitive)
I'm not a big fan of repeating the kfuncs. This is going to be error-prone.
Can't it register and test the existing sets in the filter function instead?
If that's too cumbersome, maybe we can put these in a macro so that we don't
have to repeat the functions?
> +static int scx_kfunc_ids_ops_context_sensitive_filter(const struct bpf_prog *prog, u32 kfunc_id)
> +{
> + u32 moff, flags;
> +
> + if (!btf_id_set8_contains(&scx_kfunc_ids_ops_context_sensitive, kfunc_id))
> + return 0;
> +
> + if (prog->type == BPF_PROG_TYPE_SYSCALL &&
> + btf_id_set8_contains(&scx_kfunc_ids_unlocked, kfunc_id))
> + return 0;
Not from this change, but these can probably be allowed from TRACING too.
> + if (prog->type == BPF_PROG_TYPE_STRUCT_OPS &&
> + prog->aux->st_ops != &bpf_sched_ext_ops)
> + return 0;
Why can't other struct_ops progs call scx_kfunc_ids_unlocked kfuncs?
> + /* prog->type == BPF_PROG_TYPE_STRUCT_OPS && prog->aux->st_ops == &bpf_sched_ext_ops */
> +
> + moff = prog->aux->attach_st_ops_member_off;
> + flags = scx_ops_context_flags[SCX_MOFF_IDX(moff)];
> +
> + if ((flags & SCX_OPS_KF_UNLOCKED) &&
> + btf_id_set8_contains(&scx_kfunc_ids_unlocked, kfunc_id))
> + return 0;
Wouldn't this disallow e.g. ops.dispatch() from calling scx_bpf_dsq_move()?
Have you tested that the before and after behaviors match?
Thanks.
--
tejun