Message-ID: <958a085d-95bf-490f-9987-b269f80635b5@linux.alibaba.com>
Date: Mon, 17 Feb 2025 10:53:17 +0800
From: Tianchen Ding <dtcccc@...ux.alibaba.com>
To: K Prateek Nayak <kprateek.nayak@....com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] sched/eevdf: Force propagating min_slice of cfs_rq
when {en,de}queue tasks
Hi. Sorry for the late reply due to the weekend.
On 2/14/25 11:42 PM, K Prateek Nayak wrote:
[...]
>
> Should we check if old slice matches with the new slice before
> propagation to avoid any unnecessary propagate call? Something like:
>
> if (se->slice != slice) {
>         se->slice = slice;
>         if (se != cfs_rq->curr)
>                 min_vruntime_cb_propagate(&se->run_node, NULL);
> }
>
> Thoughts?
>
This optimization makes sense to me, but open-coding the check at both call
sites would be a bit ugly :-/
Maybe we should wrap it in a helper, something like:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1e78caa21436..ccceb67004a4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -844,6 +844,16 @@ static inline bool min_vruntime_update(struct sched_entity *se, bool exit)
 RB_DECLARE_CALLBACKS(static, min_vruntime_cb, struct sched_entity,
                      run_node, min_vruntime, min_vruntime_update);
 
+static inline void propagate_slice(struct cfs_rq *cfs_rq, struct sched_entity *se, u64 slice)
+{
+        if (se->slice == slice)
+                return;
+
+        se->slice = slice;
+        if (se != cfs_rq->curr)
+                min_vruntime_cb_propagate(&se->run_node, NULL);
+}
+
 /*
  * Enqueue an entity into the rb-tree:
  */
@@ -6969,7 +6979,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
                 se_update_runnable(se);
                 update_cfs_group(se);
 
-                se->slice = slice;
+                propagate_slice(cfs_rq, se, slice);
                 slice = cfs_rq_min_slice(cfs_rq);
 
                 cfs_rq->h_nr_runnable += h_nr_runnable;
@@ -7098,7 +7108,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
                 se_update_runnable(se);
                 update_cfs_group(se);
 
-                se->slice = slice;
+                propagate_slice(cfs_rq, se, slice);
                 slice = cfs_rq_min_slice(cfs_rq);
 
                 cfs_rq->h_nr_runnable -= h_nr_runnable;
--

Since the patch has already been accepted, I'm not sure whether I should send
a new version. The current (accepted) version does introduce an extra
min_vruntime_cb_propagate() call when se->slice == slice, but that loop will
run only once and exit, because RBCOMPUTE() (i.e. min_vruntime_update()) will
return true when nothing has changed. So maybe the cost is insignificant?
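
For reference, RB_DECLARE_CALLBACKS() expands min_vruntime_cb_propagate()
into roughly the following (a simplified sketch of the macro expansion in
include/linux/rbtree_augmented.h, not the exact generated code):

static inline void min_vruntime_cb_propagate(struct rb_node *rb,
                                             struct rb_node *stop)
{
        while (rb != stop) {
                struct sched_entity *node = rb_entry(rb, struct sched_entity,
                                                     run_node);

                /* RBCOMPUTE: refresh this node's cached min_vruntime/min_slice */
                if (min_vruntime_update(node, true))
                        break;  /* cached values unchanged, stop walking up */

                rb = rb_parent(&node->run_node);
        }
}

So when se->slice is rewritten with the same value, the first RBCOMPUTE() on
se finds nothing changed and the walk toward the root stops immediately,
without touching any ancestor.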