Message-ID: <aA1BJewva-MMTabR@gpd3>
Date: Sat, 26 Apr 2025 22:25:09 +0200
From: Andrea Righi <arighi@...dia.com>
To: Tejun Heo <tj@...nel.org>
Cc: void@...ifault.com, multics69@...il.com, linux-kernel@...r.kernel.org,
sched-ext@...a.com
Subject: Re: [PATCH 06/12] sched_ext: Move dsq_hash into scx_sched

Hi Tejun,

On Fri, Apr 25, 2025 at 11:58:21AM -1000, Tejun Heo wrote:
> User DSQs are going to become per scheduler instance. Move dsq_hash into
> scx_sched. This shifts the code that assumes scx_root to be the only
> scx_sched instance up the call stack but doesn't remove them yet.
>
> Signed-off-by: Tejun Heo <tj@...nel.org>
> ---
...
> @@ -6858,7 +6889,11 @@ __bpf_kfunc s32 scx_bpf_dsq_nr_queued(u64 dsq_id)
> */
> __bpf_kfunc void scx_bpf_destroy_dsq(u64 dsq_id)
> {
> - destroy_dsq(dsq_id);
> + struct scx_sched *sch;
> +
> + sch = rcu_dereference(scx_root);
> + if (sch)
> + destroy_dsq(sch, dsq_id);
> }
>
> /**
I just triggered the following lockdep splat while running the create_dsq
selftest. When scx_bpf_destroy_dsq() is called from ops.init(), the
rcu_dereference() of scx_root runs without rcu_read_lock()/rcu_read_unlock()
held; should we just add that around it?
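
Something along these lines is what I had in mind (untested sketch on top of
this patch, assuming we want to handle the sleepable ops.init() path in the
kfunc itself rather than requiring callers to hold the RCU read lock):

__bpf_kfunc void scx_bpf_destroy_dsq(u64 dsq_id)
{
	struct scx_sched *sch;

	/*
	 * ops.init() runs from a sleepable BPF context, so take the RCU
	 * read lock explicitly around the scx_root dereference.
	 */
	rcu_read_lock();
	sch = rcu_dereference(scx_root);
	if (sch)
		destroy_dsq(sch, dsq_id);
	rcu_read_unlock();
}
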
arighi@...tme-ng~/s/l/t/t/s/sched_ext (scx)> sudo ./runner -t create_dsq
===== START =====
TEST: create_dsq
DESCRIPTION: Create and destroy a dsq in a loop
OUTPUT:
[ 72.890532]
[ 72.890621] =============================
[ 72.890652] WARNING: suspicious RCU usage
[ 72.890683] 6.14.0-virtme #33 Not tainted
[ 72.890720] -----------------------------
[ 72.890754] kernel/sched/ext.c:6879 suspicious rcu_dereference_check() usage!
[ 72.890819]
[ 72.890819] other info that might help us debug this:
[ 72.890819]
[ 72.890879]
[ 72.890879] rcu_scheduler_active = 2, debug_locks = 1
[ 72.890935] 4 locks held by runner/2097:
[ 72.890967] #0: ffffffffb239d968 (update_mutex){+.+.}-{4:4}, at: bpf_struct_ops_link_create+0x112/0x180
[ 72.891050] #1: ffffffffb228aa68 (scx_enable_mutex){+.+.}-{4:4}, at: scx_enable.isra.0+0x65/0x1420
[ 72.891141] #2: ffffffffb2274c90 (cpu_hotplug_lock){++++}-{0:0}, at: scx_enable.isra.0+0x516/0x1420
[ 72.891242] #3: ffffffffb236fb80 (rcu_read_lock_trace){....}-{0:0}, at: __bpf_prog_enter_sleepable+0x27/0xa0
[ 72.891331]
[ 72.891331] stack backtrace:
[ 72.891377] CPU: 1 UID: 0 PID: 2097 Comm: runner Not tainted 6.14.0-virtme #33 PREEMPT(full)
[ 72.891379] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.3-1-1 04/01/2014
[ 72.891380] Sched_ext: create_dsq (enabling)
[ 72.891381] Call Trace:
[ 72.891383] <TASK>
[ 72.891385] dump_stack_lvl+0x9e/0xe0
[ 72.891390] lockdep_rcu_suspicious+0x14a/0x1b0
[ 72.891396] scx_bpf_destroy_dsq+0x71/0x80
[ 72.891401] bpf_prog_4b98ae790b57e181_create_dsq_init+0xcd/0xe0
[ 72.891403] ? __bpf_prog_enter_sleepable+0x27/0xa0
[ 72.891407] bpf__sched_ext_ops_init+0x40/0xa4
[ 72.891411] ? scx_idle_enable+0xf0/0x130
[ 72.891414] scx_enable.isra.0+0x54b/0x1420
[ 72.891440] bpf_struct_ops_link_create+0x12c/0x180
[ 72.891447] __sys_bpf+0x1fdd/0x2a90
[ 72.891470] __x64_sys_bpf+0x1e/0x30
[ 72.891473] do_syscall_64+0xbb/0x1d0
[ 72.891477] entry_SYSCALL_64_after_hwframe+0x77/0x7f
[ 72.891479] RIP: 0033:0x7f82b9508fad
[ 72.891481] Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 2b 7d 0c 00 f7 d8 64 89 01 48
[ 72.891482] RSP: 002b:00007ffcd032fb58 EFLAGS: 00000206 ORIG_RAX: 0000000000000141
[ 72.891483] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f82b9508fad
[ 72.891484] RDX: 0000000000000040 RSI: 00007ffcd032fc40 RDI: 000000000000001c
[ 72.891484] RBP: 00007ffcd032fb70 R08: 00007ffcd032fc40 R09: 00007ffcd032fc40
[ 72.891485] R10: 00007ffcd032f9e0 R11: 0000000000000206 R12: 00007ffcd0330dfc
[ 72.891485] R13: 000055e7e8854160 R14: 0000000000000000 R15: 000055e7e8854160
[ 72.891495] </TASK>
[ 72.922754] sched_ext: BPF scheduler "create_dsq" enabled
[ 72.940151] sched_ext: BPF scheduler "create_dsq" disabled (unregistered from user space)

That's the only issue I found; other than that, everything looks good to me.

Thanks,
-Andrea