Message-ID: <20200624103618.zkk2unblc265v4mo@e107158-lin.cambridge.arm.com>
Date:   Wed, 24 Jun 2020 11:36:19 +0100
From:   Qais Yousef <qais.yousef@....com>
To:     Patrick Bellasi <patrick.bellasi@...bug.net>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Chris Redpath <chris.redpath@....com>,
        Lukasz Luba <lukasz.luba@....com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/2] sched/uclamp: Fix initialization of struct
 uclamp_rq

On 06/24/20 09:26, Patrick Bellasi wrote:
> 
> Hi Qais,
> 
> On Fri, Jun 19, 2020 at 19:20:10 +0200, Qais Yousef <qais.yousef@....com> wrote...
> 
> > struct uclamp_rq was zeroed out entirely on the assumption that the first
> > call to uclamp_rq_inc() would initialize it correctly in accordance with
> > the default settings.
> >
> > But when the next patch introduces a static key to skip
> > uclamp_rq_{inc,dec}() until userspace opts in to use uclamp, schedutil
> > will fail to perform any frequency changes because
> > rq->uclamp[UCLAMP_MAX].value is zeroed at init and stays that way, which
> > means all rqs are capped to 0 by default.
> 
> Doesn't this mean the problem is more likely with uclamp_rq_util_with(),
> which should be guarded?

The initialization is wrong and needs to be fixed, no? So I wouldn't say
uclamp_rq_util_with() has any problem.

For RT boosting to work as-is, uclamp_rq_util_with() needs to stay the same;
otherwise we'd need to add extra logic to deal with that, which I don't think
is worth it or necessary. The function is called from sugov and
find_energy_efficient_cpu(), and neither is a concern for making this call
unconditionally, IMO.
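
To illustrate, here's a simplified sketch (not the actual
uclamp_rq_util_with() code) of the rq-level clamping that ends up in sugov's
utilization path, and why the zero-init matters once the static key skips
uclamp_rq_{inc,dec}():

	/* Simplified sketch only -- not the exact kernel implementation. */
	#define SCHED_CAPACITY_SCALE	1024

	static unsigned long rq_clamp_util(unsigned long util,
					   unsigned long rq_min,
					   unsigned long rq_max)
	{
		/* Same effect as clamp(util, rq_min, rq_max) */
		if (util < rq_min)
			return rq_min;
		if (util > rq_max)
			return rq_max;
		return util;
	}

	/*
	 * With the old zero-init and uclamp_rq_{inc,dec}() never run,
	 * rq->uclamp[UCLAMP_MAX].value stays 0, so any utilization gets
	 * clamped down to 0 and schedutil never raises the frequency:
	 *
	 *	rq_clamp_util(768, 0, 0)                    == 0
	 *
	 * With the value initialized to uclamp_none(UCLAMP_MAX), i.e.
	 * SCHED_CAPACITY_SCALE, the clamp is a no-op until userspace opts in:
	 *
	 *	rq_clamp_util(768, 0, SCHED_CAPACITY_SCALE) == 768
	 */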

Thanks

--
Qais Yousef

> 
> Otherwise, we will also keep doing useless min/max aggregations each
> time schedutil calls that function, thus not completely removing
> uclamp overheads while user-space has not opted in.
> 
> What about dropping this and adding the guard in the following patch, along
> with the others?

> 
> >
> > Fix it by making sure we do proper initialization at init without
> > relying on uclamp_rq_inc() doing it later.
> >
> > Fixes: 69842cba9ace ("sched/uclamp: Add CPU's clamp buckets refcounting")
> > Signed-off-by: Qais Yousef <qais.yousef@....com>
> > Cc: Juri Lelli <juri.lelli@...hat.com>
> > Cc: Vincent Guittot <vincent.guittot@...aro.org>
> > Cc: Dietmar Eggemann <dietmar.eggemann@....com>
> > Cc: Steven Rostedt <rostedt@...dmis.org>
> > Cc: Ben Segall <bsegall@...gle.com>
> > Cc: Mel Gorman <mgorman@...e.de>
> > CC: Patrick Bellasi <patrick.bellasi@...bug.net>
> > Cc: Chris Redpath <chris.redpath@....com>
> > Cc: Lukasz Luba <lukasz.luba@....com>
> > Cc: linux-kernel@...r.kernel.org
> > ---
> >  kernel/sched/core.c | 23 ++++++++++++++++++-----
> >  1 file changed, 18 insertions(+), 5 deletions(-)
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index a43c84c27c6f..4265861e13e9 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -1248,6 +1248,22 @@ static void uclamp_fork(struct task_struct *p)
> >  	}
> >  }
> >  
> > +static void __init init_uclamp_rq(struct rq *rq)
> > +{
> > +	enum uclamp_id clamp_id;
> > +	struct uclamp_rq *uc_rq = rq->uclamp;
> > +
> > +	for_each_clamp_id(clamp_id) {
> > +		memset(uc_rq[clamp_id].bucket,
> > +		       0,
> > +		       sizeof(struct uclamp_bucket)*UCLAMP_BUCKETS);
> > +
> > +		uc_rq[clamp_id].value = uclamp_none(clamp_id);
> > +	}
> > +
> > +	rq->uclamp_flags = 0;
> > +}
> > +
> >  static void __init init_uclamp(void)
> >  {
> >  	struct uclamp_se uc_max = {};
> > @@ -1256,11 +1272,8 @@ static void __init init_uclamp(void)
> >  
> >  	mutex_init(&uclamp_mutex);
> >  
> > -	for_each_possible_cpu(cpu) {
> > -		memset(&cpu_rq(cpu)->uclamp, 0,
> > -				sizeof(struct uclamp_rq)*UCLAMP_CNT);
> > -		cpu_rq(cpu)->uclamp_flags = 0;
> > -	}
> > +	for_each_possible_cpu(cpu)
> > +		init_uclamp_rq(cpu_rq(cpu));
> >  
> >  	for_each_clamp_id(clamp_id) {
> >  		uclamp_se_set(&init_task.uclamp_req[clamp_id],
> 
