Message-ID: <aAQDIIPOUAU-nB_F@gpd3>
Date: Sat, 19 Apr 2025 22:10:08 +0200
From: Andrea Righi <arighi@...dia.com>
To: Tejun Heo <tj@...nel.org>
Cc: David Vernet <void@...ifault.com>, Changwoo Min <changwoo@...lia.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] sched_ext: Track currently locked rq
On Sat, Apr 19, 2025 at 07:34:16AM -1000, Tejun Heo wrote:
> Hello, Andrea.
>
> On Sat, Apr 19, 2025 at 02:24:30PM +0200, Andrea Righi wrote:
> > @@ -149,6 +149,7 @@ struct sched_ext_entity {
> > s32 selected_cpu;
> > u32 kf_mask; /* see scx_kf_mask above */
> > struct task_struct *kf_tasks[2]; /* see SCX_CALL_OP_TASK() */
> > + struct rq *locked_rq; /* currently locked rq */
>
> Can this be a percpu variable? While rq is locked, current can't switch out
> anyway and that way we don't have to increase the size of task. Note that
> kf_tasks[] are different in that some ops may, at least theoretically,
> sleep.
Yeah, I was debating between using a percpu variable and storing it in
current, and went with current just to stay consistent with kf_tasks.
But you're right about not increasing the size of the task, and, as you
pointed out, current can't switch out while the rq is locked, so a percpu
variable should work. I'll update that in v2.
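For reference, a rough, untested sketch of what I have in mind for v2
(the locked_rq percpu variable and the scx_locked_rq() helper are just
placeholder names at this point):

	/*
	 * Sketch: track the rq currently locked on this CPU in a percpu
	 * variable. This is safe because current cannot be switched out
	 * while it holds an rq lock.
	 */
	static DEFINE_PER_CPU(struct rq *, locked_rq);

	static inline struct rq *scx_locked_rq(void)
	{
		/* Return the rq currently locked on this CPU (or NULL). */
		return __this_cpu_read(locked_rq);
	}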
>
> > +static inline void update_locked_rq(struct rq *rq)
> > +{
> > + /*
> > + * Check whether @rq is actually locked. This can help expose bugs
> > + * or incorrect assumptions about the context in which a kfunc or
> > + * callback is executed.
> > + */
> > + if (rq)
> > + lockdep_assert_rq_held(rq);
> > + current->scx.locked_rq = rq;
> > + barrier();
>
> As these conditions are program-order checks on the local CPU, I don't think
> any barrier is necessary.
Right, these are local-CPU accesses only; I'll drop the barrier.
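With the percpu approach above, the update path would then reduce to
something like this (sketch only, no barrier):

	static inline void update_locked_rq(struct rq *rq)
	{
		/*
		 * Check whether @rq is actually locked. This can help expose
		 * bugs or incorrect assumptions about the context in which a
		 * kfunc or callback is executed.
		 */
		if (rq)
			lockdep_assert_rq_held(rq);

		/* Plain local write, program-ordered on this CPU. */
		__this_cpu_write(locked_rq, rq);
	}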
Thanks,
-Andrea