Message-ID: <CANCG0Gcm92LNtei5yLym-5dK96gb5GF2-tDoLJ+YS0fMx8jADg@mail.gmail.com>
Date: Thu, 13 Mar 2025 02:21:53 -0500
From: Aaron Lu <ziqianlu@...edance.com>
To: Valentin Schneider <vschneid@...hat.com>, Ben Segall <bsegall@...gle.com>, 
	K Prateek Nayak <kprateek.nayak@....com>, Peter Zijlstra <peterz@...radead.org>, 
	Josh Don <joshdon@...gle.com>, Ingo Molnar <mingo@...hat.com>, 
	Vincent Guittot <vincent.guittot@...aro.org>
Cc: linux-kernel@...r.kernel.org, Juri Lelli <juri.lelli@...hat.com>, 
	Dietmar Eggemann <dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>, 
	Mel Gorman <mgorman@...e.de>, Chengming Zhou <chengming.zhou@...ux.dev>, 
	Chuyi Zhou <zhouchuyi@...edance.com>
Subject: [RFC PATCH 5/7] sched/fair: Take care of group/affinity/sched_class
 change for throttled task

On task group change, for a queued task, core will dequeue it and then
requeue it. A throttled task is still considered queued by core because
p->on_rq is still set, so core will dequeue it too; but since the task
has already been dequeued on throttle, handle this case properly in the
fair class code.

Affinity and sched class changes are handled similarly.
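
Purely as an illustration (not part of this patch), here is a minimal
user-space sketch of the pattern described above: the core-side change
path dequeues and re-enqueues any task it considers queued (on_rq set),
while a throttled task only sits on a limbo list, so the fair-class
dequeue merely has to unlink it from that list. All names here
(struct task, fair_dequeue, change_group, ...) are made up for the
example and are not the real scheduler API.

#include <stdbool.h>
#include <stdio.h>

struct task {
	bool on_rq;         /* core's view: still queued                 */
	bool throttled;     /* dequeued on throttle, parked in limbo     */
	bool on_limbo_list; /* linked on the throttled_limbo_list analog */
	bool on_runqueue;   /* actually on the runqueue                  */
};

/* Fair-class dequeue: a throttled task is no longer on the runqueue,
 * only on the limbo list, so just drop it from there. */
static bool fair_dequeue(struct task *p)
{
	if (p->throttled) {
		p->on_limbo_list = false;
		return true;
	}
	p->on_runqueue = false;
	return true;
}

/* Fair-class enqueue: a still-throttled task goes back to limbo,
 * everything else back onto the runqueue. */
static void fair_enqueue(struct task *p)
{
	if (p->throttled)
		p->on_limbo_list = true;
	else
		p->on_runqueue = true;
}

/* Core-side change path: dequeue, apply the change, re-enqueue. */
static void change_group(struct task *p)
{
	if (p->on_rq)
		fair_dequeue(p);
	/* ... move p to its new group/affinity/class here ... */
	if (p->on_rq)
		fair_enqueue(p);
}

int main(void)
{
	struct task p = { .on_rq = true, .throttled = true, .on_limbo_list = true };

	change_group(&p);
	printf("limbo=%d runqueue=%d\n", p.on_limbo_list, p.on_runqueue);
	return 0;
}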

Signed-off-by: Aaron Lu <ziqianlu@...edance.com>
---
 kernel/sched/fair.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9e036f18d73e6..f26d53ac143fe 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5876,8 +5876,8 @@ static void throttle_cfs_rq_work(struct callback_head *work)

 	update_rq_clock(rq);
 	WARN_ON_ONCE(!list_empty(&p->throttle_node));
-	list_add(&p->throttle_node, &cfs_rq->throttled_limbo_list);
 	dequeue_task_fair(rq, p, DEQUEUE_SLEEP | DEQUEUE_SPECIAL);
+	list_add(&p->throttle_node, &cfs_rq->throttled_limbo_list);
 	resched_curr(rq);

 out_unlock:
@@ -5920,10 +5920,6 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
 	/* Re-enqueue the tasks that have been throttled at this level. */
 	list_for_each_entry_safe(p, tmp, &cfs_rq->throttled_limbo_list, throttle_node) {
 		list_del_init(&p->throttle_node);
-		/*
-		 * FIXME: p may not be allowed to run on this rq anymore
-		 * due to affinity change while p is throttled.
-		 */
 		enqueue_task_fair(rq_of(cfs_rq), p, ENQUEUE_WAKEUP);
 	}

@@ -7194,6 +7190,16 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
  */
 static bool dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 {
+	if (task_is_throttled(p)) {
+		/* sched/core wants to dequeue this throttled task. */
+		SCHED_WARN_ON(p->se.on_rq);
+		SCHED_WARN_ON(flags & DEQUEUE_SLEEP);
+
+		list_del_init(&p->throttle_node);
+
+		return true;
+	}
+
 	if (!(p->se.sched_delayed && (task_on_rq_migrating(p) || (flags & DEQUEUE_SAVE))))
 		util_est_dequeue(&rq->cfs, p);

-- 
2.39.5
