Message-ID: <20250925131025.GA4067720@noisy.programming.kicks-ass.net>
Date: Thu, 25 Sep 2025 15:10:25 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Tejun Heo <tj@...nel.org>
Cc: linux-kernel@...r.kernel.org, mingo@...hat.com, juri.lelli@...hat.com,
	vincent.guittot@...aro.org, dietmar.eggemann@....com,
	rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
	vschneid@...hat.com, longman@...hat.com, hannes@...xchg.org,
	mkoutny@...e.com, void@...ifault.com, arighi@...dia.com,
	changwoo@...lia.com, cgroups@...r.kernel.org,
	sched-ext@...ts.linux.dev, liuwenfang@...or.com, tglx@...utronix.de
Subject: Re: [PATCH 13/14] sched: Add {DE,EN}QUEUE_LOCKED

On Fri, Sep 12, 2025 at 06:32:32AM -1000, Tejun Heo wrote:
> Hello,
> 
> On Fri, Sep 12, 2025 at 04:19:04PM +0200, Peter Zijlstra wrote:
> ...
> > Ah, but I think we *have* to change it :/ The thing is that with the new
> > pick you can change 'rq' without holding the source rq->lock. So we
> > can't maintain this list.
> > 
> > Could something like so work?
> > 
> > 	scoped_guard (rcu) for_each_process_thread(g, p) {
> > 		if (p->flags & PF_EXITING || p->sched_class != ext_sched_class)
> > 			continue;
> > 
> > 		guard(task_rq_lock)(p);
> > 		scoped_guard (sched_change, p) {
> > 			/* no-op */
> > 		}
> > 	}	
> 
> Yeah, or I can make scx_tasks iteration smarter so that it can skip through
> the list for tasks which aren't runnable. As long as it doesn't do lock ops
> on every task, it should be fine. I think this is solvable one way or
> another. Let's continue in the other subthread.

Well, either this or the scx_tasks iterator will result in lock ops for
every task; this is unavoidable if we want the normal p->pi_lock,
rq->lock (dsq->lock) taken for every sched_change caller.
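
Spelled out, the per-task locking I mean is roughly the below (sketch
only, the wrapper function is made up; real callers go through the
sched_change guards):

	/* sketch: the normal lock ordering expected of a sched_change caller */
	static void sched_change_lock_order_sketch(struct task_struct *p)
	{
		struct rq_flags rf;
		struct rq *rq;

		/* p->pi_lock first: serializes against wakeups and class changes */
		raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
		/* then the task's rq->lock, pinned through rf */
		rq = __task_rq_lock(p, &rf);

		/* ... dequeue / change / enqueue, i.e. the sched_change body ... */

		__task_rq_unlock(rq, &rf);
		raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
	}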

I have the below, which I would like to include in the series so that I
can clean up all that DEQUEUE_LOCKED stuff a bit, this being the only
sched_change user that's 'weird'.

An added 'bonus' is of course one less user of the runnable_list.

(also, I have to note that for_each_cpu() with preemption disabled is
asking for trouble; the enormous core-count machines are no longer super
esoteric)

--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -4817,6 +4817,7 @@ static void scx_bypass(bool bypass)
 {
 	static DEFINE_RAW_SPINLOCK(bypass_lock);
 	static unsigned long bypass_timestamp;
+	struct task_struct *g, *p;
 	struct scx_sched *sch;
 	unsigned long flags;
 	int cpu;
@@ -4849,16 +4850,16 @@ static void scx_bypass(bool bypass)
 	 * queued tasks are re-queued according to the new scx_rq_bypassing()
 	 * state. As an optimization, walk each rq's runnable_list instead of
 	 * the scx_tasks list.
-	 *
-	 * This function can't trust the scheduler and thus can't use
-	 * cpus_read_lock(). Walk all possible CPUs instead of online.
+	 */
+
+	/*
+	 * XXX online_mask is stable due to !preempt (per bypass_lock)
+	 * so could this be for_each_online_cpu() ?
 	 */
 	for_each_possible_cpu(cpu) {
 		struct rq *rq = cpu_rq(cpu);
-		struct task_struct *p, *n;
 
 		raw_spin_rq_lock(rq);
-
 		if (bypass) {
 			WARN_ON_ONCE(rq->scx.flags & SCX_RQ_BYPASSING);
 			rq->scx.flags |= SCX_RQ_BYPASSING;
@@ -4866,36 +4867,33 @@ static void scx_bypass(bool bypass)
 			WARN_ON_ONCE(!(rq->scx.flags & SCX_RQ_BYPASSING));
 			rq->scx.flags &= ~SCX_RQ_BYPASSING;
 		}
+		raw_spin_rq_unlock(rq);
+	}
+
+	/* implicit RCU section due to bypass_lock */
+	for_each_process_thread(g, p) {
+		unsigned int state;
 
-		/*
-		 * We need to guarantee that no tasks are on the BPF scheduler
-		 * while bypassing. Either we see enabled or the enable path
-		 * sees scx_rq_bypassing() before moving tasks to SCX.
-		 */
-		if (!scx_enabled()) {
-			raw_spin_rq_unlock(rq);
+		guard(raw_spinlock)(&p->pi_lock);
+		if (p->flags & PF_EXITING || p->sched_class != &ext_sched_class)
+			continue;
+
+		state = READ_ONCE(p->__state);
+		if (state != TASK_RUNNING && state != TASK_WAKING)
 			continue;
-		}
 
-		/*
-		 * The use of list_for_each_entry_safe_reverse() is required
-		 * because each task is going to be removed from and added back
-		 * to the runnable_list during iteration. Because they're added
-		 * to the tail of the list, safe reverse iteration can still
-		 * visit all nodes.
-		 */
-		list_for_each_entry_safe_reverse(p, n, &rq->scx.runnable_list,
-						 scx.runnable_node) {
-			/* cycling deq/enq is enough, see the function comment */
-			scoped_guard (sched_change, p, DEQUEUE_SAVE | DEQUEUE_MOVE) {
-				/* nothing */ ;
-			}
+		guard(__task_rq_lock)(p);
+		scoped_guard (sched_change, p, DEQUEUE_SAVE | DEQUEUE_MOVE) {
+			/* nothing */ ;
 		}
+	}
 
-		/* resched to restore ticks and idle state */
-		if (cpu_online(cpu) || cpu == smp_processor_id())
-			resched_curr(rq);
+	/* implicit !preempt section due to bypass_lock */
+	for_each_online_cpu(cpu) {
+		struct rq *rq = cpu_rq(cpu);
 
+		raw_spin_rq_lock(rq);
+		resched_curr(rq);
 		raw_spin_rq_unlock(rq);
 	}
 

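For readability, the resulting shape of scx_bypass() is roughly the
below; this is just a condensed outline, not part of the patch
(bypass_lock handling, the WARN_ON_ONCE()s and the timestamp/breather
bits are elided):

/* condensed outline of the resulting scx_bypass(), not the actual code */
static void scx_bypass_outline(bool bypass)
{
	struct task_struct *g, *p;
	int cpu;

	/* 1) flip SCX_RQ_BYPASSING on every rq, one rq->lock at a time */
	for_each_possible_cpu(cpu) {
		struct rq *rq = cpu_rq(cpu);

		raw_spin_rq_lock(rq);
		if (bypass)
			rq->scx.flags |= SCX_RQ_BYPASSING;
		else
			rq->scx.flags &= ~SCX_RQ_BYPASSING;
		raw_spin_rq_unlock(rq);
	}

	/*
	 * 2) cycle deq/enq for every runnable SCX task; RCU and !preempt
	 *    are implied by bypass_lock.
	 */
	for_each_process_thread(g, p) {
		unsigned int state;

		guard(raw_spinlock)(&p->pi_lock);
		if (p->flags & PF_EXITING || p->sched_class != &ext_sched_class)
			continue;

		state = READ_ONCE(p->__state);
		if (state != TASK_RUNNING && state != TASK_WAKING)
			continue;

		guard(__task_rq_lock)(p);
		scoped_guard (sched_change, p, DEQUEUE_SAVE | DEQUEUE_MOVE) {
			/* nothing */ ;
		}
	}

	/* 3) kick each online CPU to restore ticks and idle state */
	for_each_online_cpu(cpu) {
		struct rq *rq = cpu_rq(cpu);

		raw_spin_rq_lock(rq);
		resched_curr(rq);
		raw_spin_rq_unlock(rq);
	}
}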