Message-ID: <aAPemAFUsJaF_C2X@slm.duckdns.org>
Date: Sat, 19 Apr 2025 07:34:16 -1000
From: Tejun Heo <tj@...nel.org>
To: Andrea Righi <arighi@...dia.com>
Cc: David Vernet <void@...ifault.com>, Changwoo Min <changwoo@...lia.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] sched_ext: Track currently locked rq
Hello, Andrea.
On Sat, Apr 19, 2025 at 02:24:30PM +0200, Andrea Righi wrote:
> @@ -149,6 +149,7 @@ struct sched_ext_entity {
> 	s32			selected_cpu;
> 	u32			kf_mask;	/* see scx_kf_mask above */
> 	struct task_struct	*kf_tasks[2];	/* see SCX_CALL_OP_TASK() */
> +	struct rq		*locked_rq;	/* currently locked rq */
Can this be a percpu variable? While an rq is locked, current can't switch
out anyway, and that way we don't have to increase the size of every
task_struct. Note that kf_tasks[] is different in that some ops may, at
least theoretically, sleep.
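
Something along the lines of the following completely untested sketch, where
the scx_locked_rq variable and the locked_rq() accessor are illustrative
names rather than existing kernel symbols:

static DEFINE_PER_CPU(struct rq *, scx_locked_rq);

static inline void update_locked_rq(struct rq *rq)
{
	/* assert only when locking; clearing the tracker passes NULL */
	if (rq)
		lockdep_assert_rq_held(rq);
	this_cpu_write(scx_locked_rq, rq);
}

static inline struct rq *locked_rq(void)
{
	return this_cpu_read(scx_locked_rq);
}

Because holding an rq lock keeps the task pinned to the CPU, readers on the
same CPU always see a consistent value.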
> +static inline void update_locked_rq(struct rq *rq)
> +{
> +	/*
> +	 * Check whether @rq is actually locked. This can help expose bugs
> +	 * or incorrect assumptions about the context in which a kfunc or
> +	 * callback is executed.
> +	 */
> +	if (rq)
> +		lockdep_assert_rq_held(rq);
> +	current->scx.locked_rq = rq;
> +	barrier();
> +}
As these stores are only consumed from the same CPU in program order, I
don't think any barrier is necessary.
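
IOW, a plain store should be enough, e.g. (same helper minus the barrier,
still just a sketch):

static inline void update_locked_rq(struct rq *rq)
{
	if (rq)
		lockdep_assert_rq_held(rq);
	/* consumed only from the same CPU in program order, no barrier needed */
	current->scx.locked_rq = rq;
}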
Thanks.
--
tejun