Message-ID: <20250702121159.535226098@infradead.org>
Date: Wed, 02 Jul 2025 13:49:35 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: mingo@...hat.com,
 juri.lelli@...hat.com,
 vincent.guittot@...aro.org,
 dietmar.eggemann@....com,
 rostedt@...dmis.org,
 bsegall@...gle.com,
 mgorman@...e.de,
 vschneid@...hat.com,
 clm@...a.com
Cc: linux-kernel@...r.kernel.org,
 peterz@...radead.org
Subject: [PATCH v2 11/12] sched: Change ttwu_runnable() vs sched_delayed

Change how TTWU handles sched_delayed tasks.

Currently sched_delayed tasks are seen as on_rq and will hit
ttwu_runnable(), which treats them like any other on_rq task:
it makes them runnable on the runqueue they are already on.

However, tasks that were fully dequeued (rather than delayed) take a
different wake-up path; notably, they pass through wakeup balancing.

Change ttwu_runnable() to dequeue delayed tasks and report that the
task is not on_rq after all, ensuring it continues down the regular
wakeup path.

Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 kernel/sched/core.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3793,8 +3793,10 @@ static int ttwu_runnable(struct task_str
 		return 0;
 
 	update_rq_clock(rq);
-	if (p->se.sched_delayed)
-		enqueue_task(rq, p, ENQUEUE_NOCLOCK | ENQUEUE_DELAYED);
+	if (p->se.sched_delayed) {
+		dequeue_task(rq, p, DEQUEUE_NOCLOCK | DEQUEUE_DELAYED | DEQUEUE_SLEEP);
+		return 0;
+	}
 	if (!task_on_cpu(rq, p)) {
 		/*
 		 * When on_rq && !on_cpu the task is preempted, see if


