Message-ID: <20210217113011.GA22176@arm.com>
Date: Wed, 17 Feb 2021 11:30:11 +0000
From: Ionela Voinescu <ionela.voinescu@....com>
To: Viresh Kumar <viresh.kumar@...aro.org>
Cc: Rafael Wysocki <rjw@...ysocki.net>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-pm@...r.kernel.org, Sudeep Holla <sudeep.holla@....com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH V3 1/2] topology: Allow multiple entities to provide
sched_freq_tick() callback
Hi,
Replying to this first as it's going to be relevant below:
> Just out of curiosity, what exactly did you test and what was the setup ? :)
I tested it on:
- Juno R0 (CPUs [0, 3-5] are littles, CPUs [1-2] are bigs)
  + PMUs faking AMUs
  + userspace/schedutil governors
  + cpufreq-FIE/!cpufreq-FIE
  + DT
This testing did not yet cover patch 2/2.
My checklist shows:
- system invariance status correct - passed
- scale factor correct (userspace cpufreq governor) - passed
- arch_set_freq_scale bypassed - passed
- partial "AMUs" support - failed (see below)
- EAS enabling - passed
I don't have an automated process for this as many test cases involve
kernel source changes. In time I will automate all of these and
possibly cover all scenarios with FVP (fast models) testing, but for
now human error is possible :).
On Wednesday 17 Feb 2021 at 09:55:58 (+0530), Viresh Kumar wrote:
> On 17-02-21, 00:24, Ionela Voinescu wrote:
> > I think it could be merged in patch 1/2 as it's part of enabling the use
> > of multiple sources of information for FIE. Up to you!
>
> Sure.
>
> > > static void amu_fie_setup(const struct cpumask *cpus)
> > > {
> > > - bool invariant;
> > > int cpu;
> > >
> > > /* We are already set since the last insmod of cpufreq driver */
> > > @@ -257,25 +256,10 @@ static void amu_fie_setup(const struct cpumask *cpus)
> > >
> > > cpumask_or(amu_fie_cpus, amu_fie_cpus, cpus);
> > >
> > > - invariant = topology_scale_freq_invariant();
> > > -
> > > - /* We aren't fully invariant yet */
> > > - if (!invariant && !cpumask_equal(amu_fie_cpus, cpu_present_mask))
> > > - return;
> > > -
> >
> > You still need these checks, otherwise you could end up with only part
> > of the CPUs setting a scale factor, when only part of the CPUs support
> > AMUs and there is no cpufreq support for FIE.
>
> Both supports_scale_freq_counters() and topology_scale_freq_invariant() take
> care of this now and they will keep reporting the system as non-invariant
> until all the CPUs have counters (in the absence of cpufreq).
>
Correct!
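For reference, if I'm reading the series right, the invariance check ends up
being roughly this (a sketch from memory, not the exact code from the patch):

	static bool supports_scale_freq_counters(const struct cpumask *cpus)
	{
		/* true only if every CPU in @cpus has a counter-based source */
		return cpumask_subset(cpus, &scale_freq_counters_mask);
	}

	bool topology_scale_freq_invariant(void)
	{
		/* either cpufreq provides FIE, or all online CPUs have counters */
		return cpufreq_supports_freq_invariance() ||
		       supports_scale_freq_counters(cpu_online_mask);
	}

So the system-wide invariance status is fine; my concern below is only about
the per-CPU scale factor.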
> The topology_set_scale_freq_source() API is supposed to be called multiple
> times, probably once for each policy, so I don't see a need for these checks
> anymore.
>
The problem is not topology_scale_freq_invariant(), but whether a scale
factor ends up being set for only some of the CPUs.
Scenario (test system above):
- "AMUs" are only supported for [1-2],
- cpufreq_supports_freq_invariance() -> false
What should happen:
- topology_scale_freq_invariant() -> false (passed)
- all CPUs should keep their freq_scale unmodified (1024) - (failed),
  because only 2 out of 6 CPUs have a way of setting a scale factor
What does happen:
- arch_scale_freq_tick() -> topology_scale_freq_tick() will set a scale
  factor for [1-2] based on AMUs. This should not happen: we end up with
  frequency-invariant signals for the bigs and signals that are not
  frequency invariant for the littles.
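So I think we'd still need something along the lines of the check removed
above, in amu_fie_setup() or wherever it fits best in the new structure -
a rough sketch only, not necessarily its final shape:

	/*
	 * Rough sketch: don't let a partial set of counter-equipped CPUs
	 * start setting freq_scale unless the system is invariant anyway
	 * (through cpufreq) or all present CPUs now have counters.
	 */
	if (!topology_scale_freq_invariant() &&
	    !cpumask_equal(amu_fie_cpus, cpu_present_mask))
		return;

Otherwise the bigs keep setting a scale factor that the littles can never
match.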
Ionela.
> > Small(ish) optimisation at the beginning of this function:
> >
> > if (cpumask_empty(&scale_freq_counters_mask))
> > scale_freq_invariant = topology_scale_freq_invariant();
> >
> > This will save you a call to rebuild_sched_domains_energy(), which is
> > quite expensive, when cpufreq supports FIE and we also have counters.
>
> Good Point.
>
> > After comments addressed,
> >
> > Reviewed-by: Ionela Voinescu <ionela.voinescu@....com>
>
> Thanks.
>
> > Tested-by: Ionela Voinescu <ionela.voinescu@....com>
>
>
> --
> viresh