Message-ID: <51b603d3-72bf-91c2-1559-b3f85b4865ba@huawei.com>
Date: Sat, 28 Jan 2023 18:41:33 +0800
From: Zhang Qiao <zhangqiao22@...wei.com>
To: Roman Kagan <rkagan@...zon.de>, <linux-kernel@...r.kernel.org>
CC: Daniel Bristot de Oliveira <bristot@...hat.com>,
Ben Segall <bsegall@...gle.com>,
Ingo Molnar <mingo@...hat.com>,
Waiman Long <longman@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Peter Zijlstra <peterz@...radead.org>,
Valentin Schneider <vschneid@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Mel Gorman <mgorman@...e.de>,
Juri Lelli <juri.lelli@...hat.com>
Subject: Re: [PATCH] sched/fair: sanitize vruntime of entity being placed
On 2023/1/28 0:32, Roman Kagan wrote:
> From: Zhang Qiao <zhangqiao22@...wei.com>
>
> When a scheduling entity is placed onto cfs_rq, its vruntime is pulled
> to the base level (around cfs_rq->min_vruntime), so that the entity
> doesn't gain extra boost when placed backwards.
>
> However, if the entity being placed wasn't executed for a long time, its
> vruntime may get too far behind (e.g. while cfs_rq was executing a
> low-weight hog), which can invert the vruntime comparison due to s64
> overflow. This results in the entity being placed with its original
> vruntime way forwards, so that it will effectively never get to the cpu.
>
> To prevent that, ignore the vruntime of the entity being placed if it
> didn't execute for much longer than the characteristic scheduler time
> scale.
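For illustration, here is a minimal userspace sketch of that inversion (not
kernel code; it reuses the same signed-difference comparison as
max_vruntime() in kernel/sched/fair.c, and the numbers are made up):

#include <stdio.h>
#include <stdint.h>

/* Same comparison scheme as max_vruntime() in kernel/sched/fair.c. */
static uint64_t max_vruntime(uint64_t max_vruntime, uint64_t vruntime)
{
	int64_t delta = (int64_t)(vruntime - max_vruntime);

	if (delta > 0)
		max_vruntime = vruntime;

	return max_vruntime;
}

int main(void)
{
	uint64_t stale = 100;				/* entity slept for ages */
	uint64_t min = stale + (1ULL << 63) + 1;	/* cfs_rq kept advancing */

	/*
	 * The gap exceeds 2^63, so the signed difference wraps: the stale
	 * vruntime is kept instead of being pulled up to min, and the same
	 * wrapped comparison then makes the entity look far in the future.
	 */
	printf("%llu\n", (unsigned long long)max_vruntime(stale, min));	/* prints 100 */
	return 0;
}
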
>
Signed-off-by: Zhang Qiao <zhangqiao22@...wei.com>
> [rkagan: formatted, adjusted commit log, comments, cutoff value]
> Co-developed-by: Roman Kagan <rkagan@...zon.de>
> Signed-off-by: Roman Kagan <rkagan@...zon.de>
> ---
> @zhangqiao22, I took the liberty to put you as the author of the patch,
> as this is essentially what you posted for discussion, with minor
> tweaks. Please stamp with your s-o-b if you're ok with it.
>
> kernel/sched/fair.c | 15 +++++++++++++--
> 1 file changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0f8736991427..d6cf131ebb0b 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4656,6 +4656,7 @@ static void
> place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
> {
> u64 vruntime = cfs_rq->min_vruntime;
> + u64 sleep_time;
>
> /*
> * The 'current' period is already promised to the current tasks,
> @@ -4685,8 +4686,18 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
> vruntime -= thresh;
> }
>
> - /* ensure we never gain time by being placed backwards. */
> - se->vruntime = max_vruntime(se->vruntime, vruntime);
> + /*
> + * Pull vruntime of the entity being placed to the base level of
> + * cfs_rq, to prevent boosting it if placed backwards. If the entity
> + * slept for a long time, don't even try to compare its vruntime with
> + * the base as it may be too far off and the comparison may get
> + * inversed due to s64 overflow.
> + */
> + sleep_time = rq_clock_task(rq_of(cfs_rq)) - se->exec_start;
> + if ((s64)sleep_time > 60 * NSEC_PER_SEC)
To avoid overflow, it would be better to use "60LL * NSEC_PER_SEC".
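
A quick userspace sketch of why (assuming NSEC_PER_SEC is 1000000000L, as in
include/vdso/time64.h, so the product is evaluated in 'long' arithmetic; on a
64-bit host both lines agree, but built with -m32 the first one typically
wraps):

#include <stdio.h>

#define NSEC_PER_SEC	1000000000L	/* a 'long': 32 bits on 32-bit builds */

int main(void)
{
	/*
	 * With a 32-bit 'long', 60 * NSEC_PER_SEC is computed in 32 bits and
	 * overflows (60e9 > 2^31 - 1); the widening to 64 bits happens too
	 * late. 60LL * NSEC_PER_SEC forces the multiplication to 64 bits.
	 */
	long long cutoff_bad  = 60 * NSEC_PER_SEC;	/* wrong on 32-bit builds */
	long long cutoff_good = 60LL * NSEC_PER_SEC;	/* 60000000000 everywhere */

	printf("%lld vs %lld\n", cutoff_bad, cutoff_good);
	return 0;
}
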
Thanks,
Qiao.
> + se->vruntime = vruntime;
> + else
> + se->vruntime = max_vruntime(se->vruntime, vruntime);
> }
>
> static void check_enqueue_throttle(struct cfs_rq *cfs_rq);
>