Message-ID: <Z-SasIwx5hINm1sf@slm.duckdns.org>
Date: Wed, 26 Mar 2025 14:24:16 -1000
From: Tejun Heo <tj@...nel.org>
To: Andrea Righi <arighi@...dia.com>
Cc: David Vernet <void@...ifault.com>, Changwoo Min <changwoo@...lia.com>,
Joel Fernandes <joelagnelf@...dia.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched_ext: Fix missing rq lock in scx_bpf_cpuperf_set()
Hello, Andrea.
On Tue, Mar 25, 2025 at 03:00:21PM +0100, Andrea Righi wrote:
> @@ -7114,12 +7114,22 @@ __bpf_kfunc void scx_bpf_cpuperf_set(s32 cpu, u32 perf)
>
> if (ops_cpu_valid(cpu, NULL)) {
> struct rq *rq = cpu_rq(cpu);
> + struct rq_flags rf;
> + bool rq_unlocked;
> +
> + preempt_disable();
> + rq_unlocked = (rq != this_rq()) || scx_kf_allowed_if_unlocked();
> + if (rq_unlocked) {
> + rq_lock_irqsave(rq, &rf);
I don't think this is correct:
- This is double-locking regardless of the locking order and thus can lead
to ABBA deadlocks.
- There's no guarantee that the locked rq is this_rq(). e.g. In wakeup path,
the locked rq is on the CPU that the wakeup is targeting, not this_rq().
Hmm... this is a bit tricky. SCX_CALL_OP*() always knows whether the rq is
locked or not. We might as well pass it the currently locked rq and remember
that in a percpu variable, so that scx_bpf_*() can always tell whether and
which cpu is rq-locked currently. If unlocked, we can grab the rq lock. If
the target cpu is not the locked one, we can either fail the operation (and
trigger an ops error) or bounce it to an irq work.
Thanks.
--
tejun