Message-ID: <87sgekorfq.derkling@matbug.net>
Date: Wed, 24 Jun 2020 09:26:17 +0200
From: Patrick Bellasi <patrick.bellasi@...bug.net>
To: Qais Yousef <qais.yousef@....com>
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Chris Redpath <chris.redpath@....com>,
Lukasz Luba <lukasz.luba@....com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] sched/uclamp: Fix initialization of struct uclamp_rq
Hi Qais,
On Fri, Jun 19, 2020 at 19:20:10 +0200, Qais Yousef <qais.yousef@....com> wrote...
> struct uclamp_rq was zeroed out entirely on the assumption that, in the
> first call to uclamp_rq_inc(), it would be initialized correctly in
> accordance with the default settings.
>
> But when the next patch introduces a static key to skip
> uclamp_rq_{inc,dec}() until userspace opts in to using uclamp, schedutil
> will fail to perform any frequency changes because
> rq->uclamp[UCLAMP_MAX].value is zeroed at init and stays that way, which
> means all rqs are capped to 0 by default.
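For reference, the capping described above boils down to something like
the following simplified sketch of the rq-side aggregation (the real
logic lives in uclamp_rq_util_with(); the function name below is just
illustrative and details such as the per-task clamps are trimmed):

	/*
	 * Simplified sketch of the rq-level clamping schedutil relies on.
	 * With rq->uclamp[UCLAMP_MAX].value left zeroed at init, clamp()
	 * collapses every utilization value to 0, so no frequency change
	 * is ever requested.
	 */
	static unsigned long rq_clamp_util_sketch(struct rq *rq,
						  unsigned long util)
	{
		unsigned long min_util = READ_ONCE(rq->uclamp[UCLAMP_MIN].value);
		unsigned long max_util = READ_ONCE(rq->uclamp[UCLAMP_MAX].value);

		return clamp(util, min_util, max_util);
	}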
Doesn't this mean the problem is more likely in uclamp_rq_util_with(),
which should be guarded?
Otherwise, we will keep doing useless min/max aggregations every time
schedutil calls that function, and thus not completely remove the uclamp
overhead while user-space has not opted in.
What about dropping this patch and adding the guard in the following
one, along with the others?
> Fix it by making sure we do proper initialization at init without
> relying on uclamp_rq_inc() doing it later.
>
> Fixes: 69842cba9ace ("sched/uclamp: Add CPU's clamp buckets refcounting")
> Signed-off-by: Qais Yousef <qais.yousef@....com>
> Cc: Juri Lelli <juri.lelli@...hat.com>
> Cc: Vincent Guittot <vincent.guittot@...aro.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@....com>
> Cc: Steven Rostedt <rostedt@...dmis.org>
> Cc: Ben Segall <bsegall@...gle.com>
> Cc: Mel Gorman <mgorman@...e.de>
> CC: Patrick Bellasi <patrick.bellasi@...bug.net>
> Cc: Chris Redpath <chris.redpath@....com>
> Cc: Lukasz Luba <lukasz.luba@....com>
> Cc: linux-kernel@...r.kernel.org
> ---
> kernel/sched/core.c | 23 ++++++++++++++++++-----
> 1 file changed, 18 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index a43c84c27c6f..4265861e13e9 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1248,6 +1248,22 @@ static void uclamp_fork(struct task_struct *p)
> }
> }
>
> +static void __init init_uclamp_rq(struct rq *rq)
> +{
> + enum uclamp_id clamp_id;
> + struct uclamp_rq *uc_rq = rq->uclamp;
> +
> + for_each_clamp_id(clamp_id) {
> + memset(uc_rq[clamp_id].bucket,
> + 0,
> + sizeof(struct uclamp_bucket)*UCLAMP_BUCKETS);
> +
> + uc_rq[clamp_id].value = uclamp_none(clamp_id);
> + }
> +
> + rq->uclamp_flags = 0;
> +}
> +
> static void __init init_uclamp(void)
> {
> struct uclamp_se uc_max = {};
> @@ -1256,11 +1272,8 @@ static void __init init_uclamp(void)
>
> mutex_init(&uclamp_mutex);
>
> - for_each_possible_cpu(cpu) {
> - memset(&cpu_rq(cpu)->uclamp, 0,
> - sizeof(struct uclamp_rq)*UCLAMP_CNT);
> - cpu_rq(cpu)->uclamp_flags = 0;
> - }
> + for_each_possible_cpu(cpu)
> + init_uclamp_rq(cpu_rq(cpu));
>
> for_each_clamp_id(clamp_id) {
> uclamp_se_set(&init_task.uclamp_req[clamp_id],