Message-ID: <610e209d-5c12-44d5-898c-f18dffbc2062@amd.com>
Date: Fri, 14 Feb 2025 21:12:53 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Tianchen Ding <dtcccc@...ux.alibaba.com>, Peter Zijlstra
<peterz@...radead.org>
CC: Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>, Dietmar Eggemann
<dietmar.eggemann@....com>, Steven Rostedt <rostedt@...dmis.org>, Ben Segall
<bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, Valentin Schneider
<vschneid@...hat.com>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3] sched/eevdf: Force propagating min_slice of cfs_rq
when {en,de}queue tasks
Hello Tianchen,
On 2/11/2025 12:06 PM, Tianchen Ding wrote:
> When a task is enqueued and its parent cgroup se is already on_rq, the
> parent cgroup se will not be enqueued again, and hence root->min_slice
> remains unchanged. The same issue happens when a task is dequeued while
> its parent cgroup se still has other runnable entities: the parent
> cgroup se will not be dequeued.
>
> Force propagating min_slice when the se doesn't need to be enqueued or
> dequeued, so that the se hierarchy always gets the latest min_slice.
>
> Fixes: aef6987d8954 ("sched/eevdf: Propagate min_slice up the cgroup hierarchy")
> Signed-off-by: Tianchen Ding <dtcccc@...ux.alibaba.com>
> ---
> v3:
> I modified some descriptions in the commit log and rebased onto the
> latest tip branch. The old version of the patch can be found in [1].
>
> The original patchset wanted to add a feature. As the 2nd patch may be
> hard to accept, I think at least the bugfix should be applied.
>
> The issue addressed by this patch was described in detail in [2].
>
> [1] https://lore.kernel.org/all/20241031094822.30531-1-dtcccc@linux.alibaba.com/
> [2] https://lore.kernel.org/all/a903d0dc-1d88-4ae7-ac81-3eed0445654d@linux.alibaba.com/
> ---
> kernel/sched/fair.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1e78caa21436..0d479b92633a 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6970,6 +6970,8 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> update_cfs_group(se);
>
> se->slice = slice;
> + if (se != cfs_rq->curr)
> + min_vruntime_cb_propagate(&se->run_node, NULL);
Should we check if old slice matches with the new slice before
propagation to avoid any unnecessary propagate call? Something like:
	if (se->slice != slice) {
		se->slice = slice;
		if (se != cfs_rq->curr)
			min_vruntime_cb_propagate(&se->run_node, NULL);
	}
Thoughts?
Other than that, the fix looks good. Feel free to add:
Reviewed-and-tested-by: K Prateek Nayak <kprateek.nayak@....com>
--
Thanks and Regards,
Prateek
> slice = cfs_rq_min_slice(cfs_rq);
>
> cfs_rq->h_nr_runnable += h_nr_runnable;
> @@ -7099,6 +7101,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
> update_cfs_group(se);
>
> se->slice = slice;
> + if (se != cfs_rq->curr)
> + min_vruntime_cb_propagate(&se->run_node, NULL);
> slice = cfs_rq_min_slice(cfs_rq);
>
> cfs_rq->h_nr_runnable -= h_nr_runnable;