Message-ID: <20250911095845.GC1386988@noisy.programming.kicks-ass.net>
Date: Thu, 11 Sep 2025 11:58:45 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Andrea Righi <arighi@...dia.com>
Cc: tj@...nel.org, linux-kernel@...r.kernel.org, mingo@...hat.com,
	juri.lelli@...hat.com, vincent.guittot@...aro.org,
	dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
	mgorman@...e.de, vschneid@...hat.com, longman@...hat.com,
	hannes@...xchg.org, mkoutny@...e.com, void@...ifault.com,
	changwoo@...lia.com, cgroups@...r.kernel.org,
	sched-ext@...ts.linux.dev, liuwenfang@...or.com, tglx@...utronix.de
Subject: Re: [PATCH 00/14] sched: Support shared runqueue locking

On Wed, Sep 10, 2025 at 08:35:55PM +0200, Peter Zijlstra wrote:

> I'll go untangle it, but probably something for tomorrow, I'm bound to
> make a mess of it now :-)

The best I could come up with is something like this. I tried a few
other approaches, but they all turned into a bigger mess.

Let me go try and run this.
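
In other words: the caller-supplied flags get OR'ed with DEQUEUE_NOCLOCK
on the dequeue side and passed through unchanged to activate_task(), so
affine_move_task() can request DEQUEUE_LOCKED while __migrate_task()
keeps the old behaviour by passing 0. A stand-alone toy model of that
flag threading (user-space C, placeholder bit values rather than the
kernel's real flag definitions):

#include <stdio.h>

#define DEQUEUE_NOCLOCK	0x01	/* placeholder values, not the kernel's */
#define DEQUEUE_LOCKED	0x02

static void deactivate_task(int flags) { printf("deactivate: flags=%#x\n", flags); }
static void activate_task(int flags)   { printf("activate:   flags=%#x\n", flags); }

/* mirrors the new move_queued_task(): OR in NOCLOCK for the dequeue,
 * pass the caller's flags through unchanged to the enqueue */
static void move_queued_task(int flags)
{
	deactivate_task(flags | DEQUEUE_NOCLOCK);
	activate_task(flags);
}

int main(void)
{
	move_queued_task(0);			/* __migrate_task() path */
	move_queued_task(DEQUEUE_LOCKED);	/* affine_move_task() path */
	return 0;
}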

---
 kernel/sched/core.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2481,11 +2481,11 @@ static inline bool is_cpu_allowed(struct
  * Returns (locked) new rq. Old rq's lock is released.
  */
 static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
-				   struct task_struct *p, int new_cpu)
+				   struct task_struct *p, int new_cpu, int flags)
 {
 	lockdep_assert_rq_held(rq);
 
-	deactivate_task(rq, p, DEQUEUE_NOCLOCK);
+	deactivate_task(rq, p, flags | DEQUEUE_NOCLOCK);
 	set_task_cpu(p, new_cpu);
 	rq_unlock(rq, rf);
 
@@ -2493,7 +2493,7 @@ static struct rq *move_queued_task(struc
 
 	rq_lock(rq, rf);
 	WARN_ON_ONCE(task_cpu(p) != new_cpu);
-	activate_task(rq, p, 0);
+	activate_task(rq, p, flags);
 	wakeup_preempt(rq, p, 0);
 
 	return rq;
@@ -2533,7 +2533,7 @@ static struct rq *__migrate_task(struct
 	if (!is_cpu_allowed(p, dest_cpu))
 		return rq;
 
-	rq = move_queued_task(rq, rf, p, dest_cpu);
+	rq = move_queued_task(rq, rf, p, dest_cpu, 0);
 
 	return rq;
 }
@@ -3007,7 +3007,7 @@ static int affine_move_task(struct rq *r
 
 		if (!is_migration_disabled(p)) {
 			if (task_on_rq_queued(p))
-				rq = move_queued_task(rq, rf, p, dest_cpu);
+				rq = move_queued_task(rq, rf, p, dest_cpu, DEQUEUE_LOCKED);
 
 			if (!pending->stop_pending) {
 				p->migration_pending = NULL;
