Message-ID: <20250925155323.GB4067720@noisy.programming.kicks-ass.net>
Date: Thu, 25 Sep 2025 17:53:23 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Tejun Heo <tj@...nel.org>
Cc: linux-kernel@...r.kernel.org, mingo@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
vschneid@...hat.com, longman@...hat.com, hannes@...xchg.org,
mkoutny@...e.com, void@...ifault.com, arighi@...dia.com,
changwoo@...lia.com, cgroups@...r.kernel.org,
sched-ext@...ts.linux.dev, liuwenfang@...or.com, tglx@...utronix.de
Subject: Re: [PATCH 13/14] sched: Add {DE,EN}QUEUE_LOCKED
On Thu, Sep 25, 2025 at 05:40:27AM -1000, Tejun Heo wrote:
> Hello,
>
> On Thu, Sep 25, 2025 at 03:10:25PM +0200, Peter Zijlstra wrote:
> ...
> > Well, either this or the scx_tasks iterator will result in lock ops for
> > every task; this is unavoidable if we want the normal p->pi_lock,
> > rq->lock (dsq->lock) taken for every sched_change caller.
> >
> > I have the below which I would like to include in the series such that I
> > can clean up all that DEQUEUE_LOCKED stuff a bit, this being the only
> > sched_change that's 'weird'.
> >
> > An added 'bonus' is of course one less user of the runnable_list.
> >
> > (Also, I have to note, for_each_cpu() with preemption disabled is asking
> > for trouble; enormous core-count machines are no longer super
> > esoteric.)
>
> Oh yeah, we can break up the iteration every N CPUs. There's no cross-CPU
> atomicity requirement.
Right.
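Something like the below, purely as a sketch of the shape; the batch size
is made up and it assumes the preempt/IRQ-off section (per bypass_lock)
gets reworked so that dropping out of it mid-way is actually legal:

	unsigned int cpu, nr = 0;

	for_each_possible_cpu(cpu) {
		struct rq *rq = cpu_rq(cpu);

		raw_spin_rq_lock(rq);
		/* flip SCX_RQ_BYPASSING as in the existing loop */
		raw_spin_rq_unlock(rq);

		/* every 64 CPUs, let the rest of the world breathe */
		if (!(++nr % 64))
			cond_resched();
	}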
> > + /*
> > + * XXX online_mask is stable due to !preempt (per bypass_lock)
> > + * so could this be for_each_online_cpu() ?
> > */
>
> CPUs can go on- and offline while bypassing is in effect. We could handle
> that in the hotplug ops, but I'm not sure the complexity is justified in
> this case.
Well, not in the current code, since the CPU running this has IRQs and
preemption disabled (per bypass_lock), and thus stop_machine, as used by
hotplug, can't make progress.
That is, disabling preemption serializes against hotplug. This is
something the scheduler relies on in quite a few places.
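For illustration only (inspect_rq() is a made-up helper), this is the
usual pattern being relied on:

	/*
	 * With preemption disabled this CPU cannot run the stopper thread,
	 * so the stop_machine phase of CPU hotplug cannot complete and
	 * cpu_online_mask stays stable for the duration.
	 */
	preempt_disable();
	for_each_online_cpu(cpu)
		inspect_rq(cpu_rq(cpu));	/* made-up helper */
	preempt_enable();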
> > for_each_possible_cpu(cpu) {
> > struct rq *rq = cpu_rq(cpu);
> > - struct task_struct *p, *n;
> >
> > raw_spin_rq_lock(rq);
> > -
> > if (bypass) {
> > WARN_ON_ONCE(rq->scx.flags & SCX_RQ_BYPASSING);
> > rq->scx.flags |= SCX_RQ_BYPASSING;
> > @@ -4866,36 +4867,33 @@ static void scx_bypass(bool bypass)
> > WARN_ON_ONCE(!(rq->scx.flags & SCX_RQ_BYPASSING));
> > rq->scx.flags &= ~SCX_RQ_BYPASSING;
>
> I may be using BYPASSING being set to mean that all tasks have been cycled.
> Will check. We may need an extra state to note that bypass switching is
> complete.
> Hmm... the switching is not synchronized against scheduling operations
> anymore - ie. we can end up mixing regular and bypassed operation for the
> same scheduling event (e.g. enqueue vs. task state transitions), which can
> lead to subtle state inconsistencies on the BPF scheduler side. Either the
> bypassing state should become per-task, which likely has system
> recoverability issues under lock storm conditions, or maybe we can just
> shift it to the scheduling path - e.g. decide whether to bypass or not at
> the beginning of the enqueue path and then stick to it until the whole
> operation is finished.
Makes sense.
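Roughly, as a sketch of that last option (the function and helper names
are invented, this is not the actual ext.c code):

	static void sketch_enqueue(struct rq *rq, struct task_struct *p, u64 enq_flags)
	{
		/* Sample the bypass state once and stick with it. */
		bool bypassing = rq->scx.flags & SCX_RQ_BYPASSING;

		if (bypassing) {
			/* short-circuit past the BPF scheduler */
			dispatch_to_local_dsq(rq, p, enq_flags);	/* invented */
			return;
		}

		/* normal path; later decisions for this event also use 'bypassing' */
		bpf_scheduler_enqueue(rq, p, enq_flags);		/* invented */
	}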
> > }
> > + raw_spin_rq_unlock(rq);
> > + }
> > +
> > + /* implicit RCU section due to bypass_lock */
> > + for_each_process_thread(g, p) {
>
> I don't think this is safe. p->tasks is unlinked in __unhash_process() but
> tasks can schedule between being unhashed and the final preempt_disable() in
> do_exit(), and thus the above iteration can miss tasks which may currently
> be runnable.
Bah, you're quite right :/
> > + unsigned int state;
> >
> > + guard(raw_spinlock)(&p->pi_lock);
> > + if (p->flags & PF_EXITING || p->sched_class != &ext_sched_class)
> > + continue;
> > +
> > + state = READ_ONCE(p->__state);
> > + if (state != TASK_RUNNING && state != TASK_WAKING)
> > continue;
> >
> > + guard(__task_rq_lock)(p);
> > + scoped_guard (sched_change, p, DEQUEUE_SAVE | DEQUEUE_MOVE) {
> > + /* nothing */ ;
> > }
> > + }
>
> This is significantly more expensive. On large systems, the number of
> threads can easily reach six digits. Iterating all of them while doing
> locking ops on each of them might become problematic depending on what the
> rest of the system is doing (unfortunately, it's not too difficult to cause
> meltdowns on some NUMA systems with cross-node traffic). I don't think
> p->tasks iterations can be broken up either.
I thought I understood that bypass isn't something that happens when the
system is happy. As long as it completes at some point, all this should
be fine, right?
I mean, yeah, it'll take a while, but meh.
Also, we could run the thing at fair or FIFO-1 or something, to be
outside of ext itself. Possibly we can freeze all the ext tasks on
return to user to limit the amount of noise they generate.
> The change guard cleanups make sense
> regardless of how the rest develops. Would it make sense to land them first?
> Once we know what to do with the core scheduling locking, I'm sure we can
> find a way to make this work accordingly.
Yeah, definitely. The thing is, if we can get all sched_change users to
look the same, it all cleans up better.
But if cleaning this up gets too vexing, we can postpone it.