Message-ID: <20190716140706.vuggfigjlys44lkp@e110439-lin>
Date: Tue, 16 Jul 2019 15:07:06 +0100
From: Patrick Bellasi <patrick.bellasi@....com>
To: Michal Koutný <mkoutny@...e.com>
Cc: linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Tejun Heo <tj@...nel.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Viresh Kumar <viresh.kumar@...aro.org>,
Paul Turner <pjt@...gle.com>,
Quentin Perret <quentin.perret@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <morten.rasmussen@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Todd Kjos <tkjos@...gle.com>,
Joel Fernandes <joelaf@...gle.com>,
Steve Muckle <smuckle@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Alessio Balsini <balsini@...roid.com>
Subject: Re: [PATCH v11 2/5] sched/core: uclamp: Propagate parent clamps
Hi Michal,
On 15-Jul 18:42, Michal Koutný wrote:
> On Mon, Jul 08, 2019 at 09:43:54AM +0100, Patrick Bellasi <patrick.bellasi@....com> wrote:
> > Since it's possible for a cpu.uclamp.min value to be bigger than the
> > cpu.uclamp.max value, ensure local consistency by restricting each
> > "protection"
> > (i.e. min utilization) with the corresponding "limit" (i.e. max
> > utilization).
> I think this constraint should be mentioned in the Documentation/....
That note comes from the previous review cycle and it's based on a
request from Tejun to align uclamp behaviors with the way the
delegation model is supposed to work.
I guess this part of the documentation:
https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html?highlight=protections#resource-distribution-models
should already cover the expected uclamp min/max behaviors.
However, I guess "repetita iuvant" (repetition helps) applies in this case. I'll
call this constraint out explicitly in the description of cpu.uclamp.min.
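To make the rule concrete, here is a minimal, self-contained sketch (the helper
name is illustrative, not a kernel function) of the "protection is always capped
by limit" behavior discussed above:

```c
#include <assert.h>

/* Hypothetical helper mirroring the constraint described above: an
 * effective protection (cpu.uclamp.min) is always restricted by the
 * corresponding effective limit (cpu.uclamp.max). */
static unsigned int uclamp_eff_min(unsigned int eff_min, unsigned int eff_max)
{
	return eff_min > eff_max ? eff_max : eff_min;
}
```

So a group asking for min=80 under a max=60 ends up with an effective
min of 60.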
> > +static void cpu_util_update_eff(struct cgroup_subsys_state *css)
> > +{
> > + struct cgroup_subsys_state *top_css = css;
> > + struct uclamp_se *uc_se = NULL;
> > + unsigned int eff[UCLAMP_CNT];
> > + unsigned int clamp_id;
> > + unsigned int clamps;
> > +
> > + css_for_each_descendant_pre(css, top_css) {
> > + uc_se = css_tg(css)->parent
> > + ? css_tg(css)->parent->uclamp : NULL;
> > +
> > + for_each_clamp_id(clamp_id) {
> > + /* Assume effective clamps matches requested clamps */
> > + eff[clamp_id] = css_tg(css)->uclamp_req[clamp_id].value;
> > + /* Cap effective clamps with parent's effective clamps */
> > + if (uc_se &&
> > + eff[clamp_id] > uc_se[clamp_id].value) {
> > + eff[clamp_id] = uc_se[clamp_id].value;
> > + }
> > + }
> > + /* Ensure protection is always capped by limit */
> > + eff[UCLAMP_MIN] = min(eff[UCLAMP_MIN], eff[UCLAMP_MAX]);
> > +
> > + /* Propagate most restrictive effective clamps */
> > + clamps = 0x0;
> > + uc_se = css_tg(css)->uclamp;
> (Nitpick only, reassigning child where was parent before decreases
> readibility. IMO)
I didn't check, but I suspect the compiler can figure out it can still
use a single pointer for both assignments.
I'll let the compiler do its job and add a dedicated stack variable
for the parent pointer.
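For illustration, a minimal, self-contained sketch of what the update could look
like with a dedicated parent variable (struct and function names here are
stand-ins for the kernel's task_group machinery, not the actual code):

```c
#include <assert.h>
#include <stddef.h>

#define UCLAMP_MIN 0
#define UCLAMP_MAX 1
#define UCLAMP_CNT 2

/* Simplified stand-in for struct task_group: requested clamps set by
 * userspace, effective clamps computed top-down. */
struct tg {
	struct tg *parent;
	unsigned int uclamp_req[UCLAMP_CNT]; /* requested */
	unsigned int uclamp[UCLAMP_CNT];     /* effective */
};

/* Compute one group's effective clamps; the parent's effective values
 * live in their own local variable for readability. */
static void tg_update_eff(struct tg *tg)
{
	struct tg *parent = tg->parent;
	unsigned int eff[UCLAMP_CNT];
	int clamp_id;

	for (clamp_id = 0; clamp_id < UCLAMP_CNT; clamp_id++) {
		/* Assume effective clamps match requested clamps */
		eff[clamp_id] = tg->uclamp_req[clamp_id];
		/* Cap effective clamps with parent's effective clamps */
		if (parent && eff[clamp_id] > parent->uclamp[clamp_id])
			eff[clamp_id] = parent->uclamp[clamp_id];
	}
	/* Ensure protection is always capped by limit */
	if (eff[UCLAMP_MIN] > eff[UCLAMP_MAX])
		eff[UCLAMP_MIN] = eff[UCLAMP_MAX];

	for (clamp_id = 0; clamp_id < UCLAMP_CNT; clamp_id++)
		tg->uclamp[clamp_id] = eff[clamp_id];
}
```

With a root whose effective clamps are {50, 70}, a child requesting
{90, 100} ends up with effective clamps {50, 70}: each value is capped
by the parent, and min stays below max.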
> > + for_each_clamp_id(clamp_id) {
> > + if (eff[clamp_id] == uc_se[clamp_id].value)
> > + continue;
> > + uc_se[clamp_id].value = eff[clamp_id];
> > + uc_se[clamp_id].bucket_id = uclamp_bucket_id(eff[clamp_id]);
> Shouldn't these writes be synchronized with writes from
> __setscheduler_uclamp()?
You're right, the synchronization is introduced by a later patch:
sched/core: uclamp: Update CPU's refcount on TG's clamp changes
Cheers,
Patrick
--
#include <best/regards.h>
Patrick Bellasi