Message-Id: <20221115171851.835-10-vincent.guittot@linaro.org>
Date: Tue, 15 Nov 2022 18:18:51 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
linux-kernel@...r.kernel.org, parth@...ux.ibm.com
Cc: qyousef@...alina.io, chris.hyser@...cle.com,
patrick.bellasi@...bug.net, David.Laight@...lab.com,
pjt@...gle.com, pavel@....cz, tj@...nel.org, qperret@...gle.com,
tim.c.chen@...ux.intel.com, joshdon@...gle.com, timj@....org,
kprateek.nayak@....com, yu.c.chen@...el.com,
youssefesmat@...omium.org, joel@...lfernandes.org,
Vincent Guittot <vincent.guittot@...aro.org>
Subject: [PATCH v9 9/9] sched/fair: remove check_preempt_from_others

With the dedicated latency list, this special case no longer needs to be
handled at enqueue time: pick_next_entity() already checks for a runnable
latency-sensitive task.
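The behaviour described above can be sketched with a toy user-space model. This is not kernel code: the struct, the list handling, and all names (`toy_entity`, `toy_pick_next`, ...) are simplified stand-ins invented for illustration, assuming only what the changelog states, namely that the pick path itself prefers a runnable latency-sensitive entity, making a separate preemption check at enqueue redundant.

```c
/*
 * Toy model of the simplified pick path. With a dedicated latency
 * list, the pick itself prefers a runnable latency-sensitive entity
 * (negative latency_offset), so enqueue no longer needs a special
 * check_preempt_from_others()-style case. Hypothetical names only.
 */
#include <stddef.h>

struct toy_entity {
	int latency_offset;	/* negative => latency sensitive */
	long vruntime;		/* lower => picked first on the timeline */
};

/* Leftmost entity on the normal timeline (stand-in for __pick_first_entity). */
static struct toy_entity *toy_pick_first(struct toy_entity **rq, int nr)
{
	struct toy_entity *best = NULL;

	for (int i = 0; i < nr; i++)
		if (!best || rq[i]->vruntime < best->vruntime)
			best = rq[i];
	return best;
}

/*
 * Simplified pick: if the head of the latency list is runnable and
 * latency sensitive, it wins; otherwise fall back to the timeline.
 */
struct toy_entity *toy_pick_next(struct toy_entity **rq, int nr,
				 struct toy_entity *latency_head)
{
	if (latency_head && latency_head->latency_offset < 0)
		return latency_head;
	return toy_pick_first(rq, nr);
}
```

Because the preference is applied inside the pick, a latency-sensitive task that wakes while a non-fair task is running is still chosen at the next pick, without any buddy being set at enqueue.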
Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
---
kernel/sched/fair.c | 34 ++--------------------------------
1 file changed, 2 insertions(+), 32 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 76da7c7a13ab..466a2fee1592 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6104,35 +6104,6 @@ static int sched_idle_cpu(int cpu)
}
#endif
-static void set_next_buddy(struct sched_entity *se);
-
-static void check_preempt_from_others(struct cfs_rq *cfs, struct sched_entity *se)
-{
- struct sched_entity *next;
-
- if (se->latency_offset >= 0)
- return;
-
- if (cfs->nr_running <= 1)
- return;
- /*
- * When waking from another class, we don't need to check to preempt at
- * wakeup and don't set next buddy as a candidate for being picked in
- * priority.
- * In case of simultaneous wakeup when current is another class, the
- * latency sensitive tasks lost opportunity to preempt non sensitive
- * tasks which woke up simultaneously.
- */
-
- if (cfs->next)
- next = cfs->next;
- else
- next = __pick_first_entity(cfs);
-
- if (next && wakeup_preempt_entity(next, se) == 1)
- set_next_buddy(se);
-}
-
/*
* The enqueue_task method is called before nr_running is
* increased. Here we update the fair scheduling stats and
@@ -6219,15 +6190,14 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
if (!task_new)
update_overutilized_status(rq);
- if (rq->curr->sched_class != &fair_sched_class)
- check_preempt_from_others(cfs_rq_of(&p->se), &p->se);
-
enqueue_throttle:
assert_list_leaf_cfs_rq(rq);
hrtick_update(rq);
}
+static void set_next_buddy(struct sched_entity *se);
+
/*
* The dequeue_task method is called before nr_running is
* decreased. We remove the task from the rbtree and
--
2.17.1