Message-ID: <176916639187.510.1647095982279283388.tip-bot2@tip-bot2>
Date: Fri, 23 Jan 2026 11:06:31 -0000
From: "tip-bot2 for Vincent Guittot" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Vincent Guittot <vincent.guittot@...aro.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: sched/urgent] sched/fair: Revert force wakeup preemption

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID: 15257cc2f905dbf5813c0bfdd3c15885f28093c4
Gitweb: https://git.kernel.org/tip/15257cc2f905dbf5813c0bfdd3c15885f28093c4
Author: Vincent Guittot <vincent.guittot@...aro.org>
AuthorDate: Fri, 23 Jan 2026 11:28:58 +01:00
Committer: Peter Zijlstra <peterz@...radead.org>
CommitterDate: Fri, 23 Jan 2026 11:53:20 +01:00

sched/fair: Revert force wakeup preemption

This aggressively bypasses run_to_parity and slice protection with the
assumption that this is what the waker wants, but there is no guarantee
that the wakee will be the next task to run. It is a better choice to
use yield_to_task or WF_SYNC in such a case.

This increases the number of reschedules and preemptions because a task
quickly becomes "ineligible" while it runs: the task's vruntime is
updated periodically, before the task has exhausted its slice or even a
minimum quantum.
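
For readers less familiar with EEVDF, here is a minimal user-space
sketch of the eligibility rule (simplified, equal weights assumed;
entity_eligible_toy and sched_entity_toy are illustrative names, not
kernel API):

/*
 * Simplified sketch, not the kernel implementation: with equal weights,
 * EEVDF treats an entity as eligible while its vruntime has not moved
 * past the queue's average vruntime, i.e. while its lag is still >= 0.
 * Since vruntime is updated periodically while a task runs, the task
 * can flip ineligible well before its slice or a minimum quantum is
 * consumed.
 */
#include <stdbool.h>

struct sched_entity_toy {
	unsigned long long vruntime;	/* virtual runtime consumed so far */
};

bool entity_eligible_toy(unsigned long long avg_vruntime,
			 const struct sched_entity_toy *se)
{
	/* lag = avg_vruntime - se->vruntime; eligible iff lag >= 0 */
	return se->vruntime <= avg_vruntime;
}
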
Example:
Two tasks A and B wake up simultaneously with lag = 0. Both are
eligible. Task A runs first and wakes up task C. The scheduler updates
task A's vruntime, which becomes greater than the average vruntime
because all the other tasks have lag == 0 and have not run yet. Task A
is now ineligible because it has received more runtime than the others,
even though it has not yet exhausted its slice nor a minimum quantum.
We force a preemption and disable the protection, but task B will run
first, not task C.

Side note: DELAY_ZERO increases this effect by clearing positive lag at
wakeup.

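To put rough numbers on the scenario above, here is a toy user-space
calculation (assumed values: 1 ms of runtime for A, equal weights, a
3-task runqueue; this is an illustration, not kernel code):

/*
 * Toy illustration with assumed numbers: after A has run for 1 ms
 * while B and the freshly woken C are still at vruntime 0, A's
 * vruntime exceeds the average of the three runnable tasks, so A is
 * already ineligible even though its slice is far from exhausted.
 * Which of B or C runs next is then decided by their deadlines, not
 * by the fact that A woke C up.
 */
#include <stdio.h>

int main(void)
{
	unsigned long long v_a = 1000000;	/* A ran for 1 ms (in ns)   */
	unsigned long long v_b = 0;		/* B has not run yet        */
	unsigned long long v_c = 0;		/* C was just woken up by A */
	unsigned long long avg = (v_a + v_b + v_c) / 3;

	printf("avg vruntime: %llu ns\n", avg);
	printf("A eligible: %s (vruntime %llu vs avg %llu)\n",
	       v_a <= avg ? "yes" : "no", v_a, avg);
	printf("B eligible: %s\n", v_b <= avg ? "yes" : "no");
	printf("C eligible: %s\n", v_c <= avg ? "yes" : "no");

	return 0;
}
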
Fixes: e837456fdca8 ("sched/fair: Reimplement NEXT_BUDDY to align with EEVDF goals")
Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Link: https://patch.msgid.link/20260123102858.52428-1-vincent.guittot@linaro.org
---
 kernel/sched/fair.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a148c61..3eaeced 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8828,16 +8828,6 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
 	if ((wake_flags & WF_FORK) || pse->sched_delayed)
 		return;
 
-	/*
-	 * If @p potentially is completing work required by current then
-	 * consider preemption.
-	 *
-	 * Reschedule if waker is no longer eligible. */
-	if (in_task() && !entity_eligible(cfs_rq, se)) {
-		preempt_action = PREEMPT_WAKEUP_RESCHED;
-		goto preempt;
-	}
-
 	/* Prefer picking wakee soon if appropriate. */
 	if (sched_feat(NEXT_BUDDY) &&
 	    set_preempt_buddy(cfs_rq, wake_flags, pse, se)) {