Message-ID: <aNVia1u-GVByUJtC@slm.duckdns.org>
Date: Thu, 25 Sep 2025 05:40:27 -1000
From: Tejun Heo <tj@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, mingo@...hat.com, juri.lelli@...hat.com,
	vincent.guittot@...aro.org, dietmar.eggemann@....com,
	rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
	vschneid@...hat.com, longman@...hat.com, hannes@...xchg.org,
	mkoutny@...e.com, void@...ifault.com, arighi@...dia.com,
	changwoo@...lia.com, cgroups@...r.kernel.org,
	sched-ext@...ts.linux.dev, liuwenfang@...or.com, tglx@...utronix.de
Subject: Re: [PATCH 13/14] sched: Add {DE,EN}QUEUE_LOCKED

Hello,

On Thu, Sep 25, 2025 at 03:10:25PM +0200, Peter Zijlstra wrote:
...
> Well, either this or scx_tasks iterator will result in lock ops for
> every task, this is unavoidable if we want the normal p->pi_lock,
> rq->lock (dsq->lock) taken for every sched_change caller.
> 
> I have the below which I would like to include in the series such that I
> can clean up all that DEQUEUE_LOCKED stuff a bit, this being the only
> sched_change that's 'weird'.
> 
> Added 'bonus' is of course one less user of the runnable_list.
> 
> (also, I have to note, for_each_cpu with preemption disabled is asking
> for trouble, the enormous core count machines are no longer super
> esoteric)

Oh yeah, we can break it up every N CPUs. There's no cross-CPU atomicity
requirement.
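
Roughly something like the following, just to illustrate the batching -
SCX_BYPASS_BATCH is made up and the bypass_lock / preemption handling is
hand-waved:

	int cpu = cpumask_first(cpu_possible_mask);

	while (cpu < nr_cpu_ids) {
		int i;

		/* the !preempt section covers at most SCX_BYPASS_BATCH CPUs */
		preempt_disable();
		for (i = 0; i < SCX_BYPASS_BATCH && cpu < nr_cpu_ids;
		     i++, cpu = cpumask_next(cpu, cpu_possible_mask)) {
			struct rq *rq = cpu_rq(cpu);

			raw_spin_rq_lock(rq);
			/* flip SCX_RQ_BYPASSING as in the patch */
			raw_spin_rq_unlock(rq);
		}
		preempt_enable();
	}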

> +	/*
> +	 * XXX online_mask is stable due to !preempt (per bypass_lock)
> +	 * so could this be for_each_online_cpu() ?
>  	 */

CPUs can go on- and offline while bypass is in effect. We can handle that
in the hotplug ops but I'm not sure the complexity is justified in this case.
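
If we did want to handle it, it'd be something along these lines in the rq
online path - scx_rq_online_sync_bypass() and scx_bypass_depth are made-up
names, the latter standing in for however we track the bypass count:

	static void scx_rq_online_sync_bypass(struct rq *rq)
	{
		lockdep_assert_rq_held(rq);

		/* a CPU coming online inherits the system-wide bypass state */
		if (atomic_read(&scx_bypass_depth))
			rq->scx.flags |= SCX_RQ_BYPASSING;
		else
			rq->scx.flags &= ~SCX_RQ_BYPASSING;
	}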

>  	for_each_possible_cpu(cpu) {
>  		struct rq *rq = cpu_rq(cpu);
> -		struct task_struct *p, *n;
>  
>  		raw_spin_rq_lock(rq);
> -
>  		if (bypass) {
>  			WARN_ON_ONCE(rq->scx.flags & SCX_RQ_BYPASSING);
>  			rq->scx.flags |= SCX_RQ_BYPASSING;
> @@ -4866,36 +4867,33 @@ static void scx_bypass(bool bypass)
>  			WARN_ON_ONCE(!(rq->scx.flags & SCX_RQ_BYPASSING));
>  			rq->scx.flags &= ~SCX_RQ_BYPASSING;

I may be relying on BYPASSING being set to mean that all tasks have been
cycled. Will check. We may need an extra state to note that bypass switching
is complete. Hmm... the switching is not synchronized against scheduling
operations anymore - ie. we can end up mixing regular and bypassed operation
for the same scheduling event (e.g. enqueue vs. task state transitions),
which can lead to subtle state inconsistencies on the BPF scheduler side.
Either the bypassing state should become per-task, which likely has system
recoverability issues under lock storm conditions, or maybe we can just
shift it to the scheduling path - e.g. decide whether to bypass or not at
the beginning of the enqueue path and then stick to it until the whole
operation is finished.
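
IOW, something like the following - scx_enqueue_one() is a made-up name,
just sketching the idea of latching the decision per event:

	static void scx_enqueue_one(struct rq *rq, struct task_struct *p,
				    u64 enq_flags)
	{
		/*
		 * Latch the decision once so that a concurrent flip of
		 * SCX_RQ_BYPASSING can't mix regular and bypassed handling
		 * within the same scheduling event.
		 */
		bool bypassing = rq->scx.flags & SCX_RQ_BYPASSING;

		lockdep_assert_rq_held(rq);

		if (bypassing) {
			/* bypass: punt to the local DSQ, skip ops.enqueue() */
			return;
		}

		/* regular path: invoke ops.enqueue() etc. */
	}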

>  		}
> +		raw_spin_rq_unlock(rq);
> +	}
> +
> +	/* implicit RCU section due to bypass_lock */
> +	for_each_process_thread(g, p) {

I don't think this is safe. p->tasks is unlinked in __unhash_process() but
tasks can schedule between being unhashed and the final preempt_disable() in
do_exit(), so the above iteration can miss tasks which may still be
runnable.

> +		unsigned int state;
>  
> +		guard(raw_spinlock)(&p->pi_lock);
> +		if (p->flags & PF_EXITING || p->sched_class != &ext_sched_class)
> +			continue;
> +
> +		state = READ_ONCE(p->__state);
> +		if (state != TASK_RUNNING && state != TASK_WAKING)
>  			continue;
>  
> +		guard(__task_rq_lock)(p);
> +		scoped_guard (sched_change, p, DEQUEUE_SAVE | DEQUEUE_MOVE) {
> +			/* nothing */ ;
>  		}
> +	}

This is significantly more expensive. On large systems, the number of
threads can easily reach six digits. Iterating all of them while doing
locking ops on each of them might become problematic depending on what the
rest of the system is doing (unfortunately, it's not too difficult to cause
meltdowns on some NUMA systems with cross-node traffic). I don't think the
p->tasks iteration can be broken up either.

I'm sure there's a solution for all these. Maybe once bypass is set and the
per-task iteration can be broken up, this is no longer a problem, or maybe
there's some other way to maintain the runnable list in a way that's
decoupled from the rq lock. The interlocking requirement is relaxed on the
removal side: there must be a way to visit all runnable tasks, but visiting
some tasks spuriously is not a problem, so there's some leeway too.
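
For example, a hand-wavy sketch of a runnable list with its own lock - names
are made up, insertion has to be reliable while removal can be lazy since
spurious visits are fine:

	struct scx_runnable_list {
		raw_spinlock_t		lock;
		struct list_head	head;
	};

	/* must be reliable: called whenever a task becomes runnable */
	static void scx_runnable_add(struct scx_runnable_list *rl,
				     struct task_struct *p)
	{
		guard(raw_spinlock_irqsave)(&rl->lock);
		if (list_empty(&p->scx.runnable_node))
			list_add_tail(&p->scx.runnable_node, &rl->head);
	}

	/* can be lazy: visitors tolerate already-dequeued tasks */
	static void scx_runnable_del(struct scx_runnable_list *rl,
				     struct task_struct *p)
	{
		guard(raw_spinlock_irqsave)(&rl->lock);
		if (!list_empty(&p->scx.runnable_node))
			list_del_init(&p->scx.runnable_node);
	}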

As with everything, this part is a bit tricky and will need a non-trivial
amount of testing to verify that it can recover the system from BPF
scheduler induced death spirals (e.g. migrating tasks too frequently across
NUMA boundaries on some systems). The change guard cleanups make sense
regardless of how the rest develops. Would it make sense to land them first?
Once we know what to do with the core scheduling locking, I'm sure we can
find a way to make this work accordingly.

Thanks.

-- 
tejun
