Message-Id: <20240405110010.024746026@infradead.org>
Date: Fri, 05 Apr 2024 12:27:57 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: mingo@...hat.com,
peterz@...radead.org,
juri.lelli@...hat.com,
vincent.guittot@...aro.org,
dietmar.eggemann@....com,
rostedt@...dmis.org,
bsegall@...gle.com,
mgorman@...e.de,
bristot@...hat.com,
vschneid@...hat.com,
linux-kernel@...r.kernel.org
Cc: kprateek.nayak@....com,
wuyun.abel@...edance.com,
tglx@...utronix.de,
efault@....de
Subject: [RFC][PATCH 03/10] sched/fair: Cleanup pick_task_fair() vs throttle

Per 54d27365cae8 ("sched/fair: Prevent throttling in early
pick_next_task_fair()") the reason check_cfs_rq_runtime() is under the
'if (curr)' check is to ensure the (downward) traversal does not
result in an empty cfs_rq.

But the pick_task_fair() 'copy' of all this restarts the traversal
anyway, so that seems to solve the issue just as well.
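
For illustration, below is a small stand-alone user-space model of the
traversal/restart logic; toy_cfs_rq, toy_check_runtime() and toy_pick()
are made-up stand-ins for cfs_rq, check_cfs_rq_runtime() and
pick_task_fair(), not the kernel code. It only shows why doing the
runtime check outside the 'if (curr)' block is fine: a throttle
restarts the walk from the root instead of descending into the queue
it just emptied.

#include <stdbool.h>
#include <stdio.h>

struct toy_cfs_rq {
	int nr_running;			/* runnable entities on this queue */
	struct toy_cfs_rq *parent;	/* up the hierarchy */
	struct toy_cfs_rq *child;	/* down the hierarchy */
	bool throttle_pending;		/* pretend we ran out of bandwidth */
};

/*
 * Stand-in for check_cfs_rq_runtime(): throttling empties this queue
 * and dequeues it from its parents.
 */
static bool toy_check_runtime(struct toy_cfs_rq *cfs_rq)
{
	struct toy_cfs_rq *p;

	if (!cfs_rq->throttle_pending)
		return false;

	cfs_rq->throttle_pending = false;
	cfs_rq->nr_running = 0;
	for (p = cfs_rq->parent; p; p = p->parent)
		p->nr_running--;

	return true;
}

/* Stand-in for pick_task_fair(): walk down, restarting on throttle. */
static struct toy_cfs_rq *toy_pick(struct toy_cfs_rq *root)
{
	struct toy_cfs_rq *cfs_rq;

again:
	cfs_rq = root;
	if (!cfs_rq->nr_running)
		return NULL;

	for (;;) {
		/*
		 * Checking runtime unconditionally is safe: if this
		 * queue gets throttled (and emptied) we start over from
		 * the root rather than walking further down into it.
		 */
		if (toy_check_runtime(cfs_rq))
			goto again;

		if (!cfs_rq->child)
			return cfs_rq;	/* deepest runnable level */

		cfs_rq = cfs_rq->child;
	}
}

int main(void)
{
	struct toy_cfs_rq leaf, group, root;

	leaf  = (struct toy_cfs_rq){ .nr_running = 1 };
	group = (struct toy_cfs_rq){ .nr_running = 1, .child = &leaf,
				     .throttle_pending = true };
	root  = (struct toy_cfs_rq){ .nr_running = 1, .child = &group };
	leaf.parent = &group;
	group.parent = &root;

	/* The walk hits the throttled group, restarts, finds nothing. */
	printf("throttled pick: %s\n", toy_pick(&root) ? "found" : "NULL");

	/* With bandwidth left, the same walk reaches the leaf. */
	group.throttle_pending = false;
	group.nr_running = root.nr_running = leaf.nr_running = 1;
	printf("normal pick: %s\n", toy_pick(&root) == &leaf ? "leaf" : "??");

	return 0;
}

Obviously the model ignores everything else pick_task_fair() does
(pick_next_entity(), the remote-rq curr handling, etc.); it only
captures the restart-on-throttle structure the patch relies on.
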
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
kernel/sched/fair.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8435,11 +8435,11 @@ static struct task_struct *pick_task_fair(struct rq *rq)
 				update_curr(cfs_rq);
 			else
 				curr = NULL;
-
-			if (unlikely(check_cfs_rq_runtime(cfs_rq)))
-				goto again;
 		}
 
+		if (unlikely(check_cfs_rq_runtime(cfs_rq)))
+			goto again;
+
 		se = pick_next_entity(cfs_rq);
 		cfs_rq = group_cfs_rq(se);
 	} while (cfs_rq);