Message-ID: <CAJZ5v0hTC20LB1ifbGmesYQULGS4-uEfu2Tgc17OMftvFqvnJg@mail.gmail.com>
Date: Mon, 16 Dec 2024 18:25:34 +0100
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Pierre Gondois <pierre.gondois@....com>
Cc: "Rafael J. Wysocki" <rafael@...nel.org>, "Rafael J. Wysocki" <rjw@...ysocki.net>, 
	Linux PM <linux-pm@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>, 
	Lukasz Luba <lukasz.luba@....com>, Peter Zijlstra <peterz@...radead.org>, 
	Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>, Len Brown <len.brown@...el.com>, 
	Dietmar Eggemann <dietmar.eggemann@....com>, Morten Rasmussen <morten.rasmussen@....com>, 
	Vincent Guittot <vincent.guittot@...aro.org>, 
	Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>, 
	Christian Loehle <Christian.Loehle@....com>
Subject: Re: [RFC][PATCH v0.1 6/6] cpufreq: intel_pstate: Add basic EAS
 support on hybrid platforms

On Mon, Dec 16, 2024 at 4:33 PM Pierre Gondois <pierre.gondois@....com> wrote:
>
>
>
> On 11/19/24 18:20, Rafael J. Wysocki wrote:
> > On Mon, Nov 18, 2024 at 5:34 PM Pierre Gondois <pierre.gondois@....com> wrote:
> >>
> >>
> >>
> >> On 11/8/24 17:46, Rafael J. Wysocki wrote:
> >>> From: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> >>>
> >>> Modify intel_pstate to register stub EM perf domains for CPUs on
> >>> hybrid platforms via em_dev_register_perf_domain() and to use
> >>> em_dev_expand_perf_domain() introduced previously for adding new
> >>> CPUs to existing EM perf domains when those CPUs become online for
> >>> the first time after driver initialization.
> >>>
> >>> This change is targeting platforms (for example, Lunar Lake) where
> >>> "small" CPUs (E-cores) are always more energy-efficient than the "big"
> >>> or "performance" CPUs (P-cores) when run at the same HWP performance
> >>> level, so it is sufficient to tell EAS that E-cores are always
> >>> preferred (so long as there is enough spare capacity on one of them
> >>> to run the given task).
> >>>
> >>> Accordingly, the perf domains are registered per CPU type (that is,
> >>> all P-cores belong to one perf domain and all E-cores belong to another
> >>> perf domain) and they are registered only if asymmetric CPU capacity is
> >>> enabled.  Each perf domain has a one-element states table and that
> >>> element only contains the relative cost value (the other fields in
> >>> it are not initialized, so they are all equal to zero), and the cost
> >>> value for the E-core perf domain is lower.
> >>>
> >>> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> >>> ---
> >>>    drivers/cpufreq/intel_pstate.c |  110 ++++++++++++++++++++++++++++++++++++++---
> >>>    1 file changed, 104 insertions(+), 6 deletions(-)
> >>>
> >>> Index: linux-pm/drivers/cpufreq/intel_pstate.c
> >>> ===================================================================
> >>> --- linux-pm.orig/drivers/cpufreq/intel_pstate.c
> >>> +++ linux-pm/drivers/cpufreq/intel_pstate.c
> >>> @@ -8,6 +8,7 @@
> >>>
> >>>    #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> >>>
> >>> +#include <linux/energy_model.h>
> >>>    #include <linux/kernel.h>
> >>>    #include <linux/kernel_stat.h>
> >>>    #include <linux/module.h>
> >>> @@ -938,6 +939,12 @@ static struct freq_attr *hwp_cpufreq_att
> >>>        NULL,
> >>>    };
> >>>
> >>> +enum hybrid_cpu_type {
> >>> +     HYBRID_PCORE = 0,
> >>> +     HYBRID_ECORE,
> >>> +     HYBRID_NR_TYPES
> >>> +};
> >>> +
> >>>    static struct cpudata *hybrid_max_perf_cpu __read_mostly;
> >>>    /*
> >>>     * Protects hybrid_max_perf_cpu, the capacity_perf fields in struct cpudata,
> >>> @@ -945,6 +952,86 @@ static struct cpudata *hybrid_max_perf_c
> >>>     */
> >>>    static DEFINE_MUTEX(hybrid_capacity_lock);
> >>>
> >>> +#ifdef CONFIG_ENERGY_MODEL
> >>> +struct hybrid_em_perf_domain {
> >>> +     cpumask_t cpumask;
> >>> +     struct device *dev;
> >>> +     struct em_data_callback cb;
> >>> +};
> >>> +
> >>> +static int hybrid_pcore_cost(struct device *dev, unsigned long freq,
> >>> +                          unsigned long *cost)
> >>> +{
> >>> +     /*
> >>> +      * The number used here needs to be higher than the analogous
> >>> +      * one in hybrid_ecore_cost() below.  The units and the actual
> >>> +      * values don't matter.
> >>> +      */
> >>> +     *cost = 2;
> >>> +     return 0;
> >>> +}
> >>> +
> >>> +static int hybrid_ecore_cost(struct device *dev, unsigned long freq,
> >>> +                          unsigned long *cost)
> >>> +{
> >>> +     *cost = 1;
> >>> +     return 0;
> >>> +}
> >>
> >> The artificial EM was introduced for CPPC-based platforms, since those
> >> platforms only provide an 'efficiency class' entry to describe the relative
> >> efficiency of CPUs. The case seems similar to yours.
> >
> > It is, but I don't particularly like the CPPC driver's approach to this.
> >
> >> 'Fake' OPPs were created to give EAS an incentive to balance the load on
> >> the CPUs in one perf domain. Indeed, in feec(), during the energy
> >> computation of a pd, if the cost is independent of the max_util value,
> >> then one CPU in the pd could end up having a high util, and another CPU
> >> a zero util.
> >
> > Isn't this a consequence of disabling load balancing by EAS?
>
> Yes. Going in that direction, this patch from Vincent should help balance
> the load in your case, I think. The patch evaluates other factors when the
> energy cost of multiple candidate CPUs is the same.
>
> Meaning, if all CPUs of the same type have only one OPP, the number of tasks
> and the load of the CPUs are then compared. This is not the case currently.
> Doing so will help avoid having one CPU close to being overutilized while
> others are idle.
>
> However, I think it would still be better to have multiple OPPs in the energy
> model. Indeed, it would be closer to reality, as I assume that for Intel as
> well, there might be frequency domains where the frequency of the domain is
> led by the most utilized CPU.

There are a couple of problems with this on my target platforms.

First, it is not actually known what the real OPPs are and how the
coordination works.

For some cores (mostly P-cores) the voltage can be adjusted per core,
and for some others there are coordination domains, but the
coordination there involves idle states (for instance, one core may be
allowed to run at the max turbo frequency when the other ones in the
same domain are in idle states, but not otherwise) as well as dynamic
balancing (that is, the effective capacity depends on how much energy
has been used by the domain over time).

Thus whatever is put into the perf state table will be way off most
of the time, and there isn't even a good way to choose the numbers to
put in there.  Using the entire range of HWP P-states for that would
be completely impractical IMV, because it would only increase overhead
for no real benefit.  Either it would need to be done per-CPU, which
doesn't really make sense because CPUs of the same type share the
same cost-performance curve, or the assumption that the entire domain
is led by the most utilized CPU would need to be lifted to some
extent.  That would require more intrusive changes in EAS, which I'd
rather avoid unless the simplest approach doesn't work.
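
To make the "simplest approach" concrete, the one-state registration
amounts to roughly the following (a simplified sketch built on the
structures quoted above; hybrid_register_perf_domain() and the
perf_domains[] table are illustrative names here, not necessarily the
exact code in the patch):

static struct hybrid_em_perf_domain perf_domains[HYBRID_NR_TYPES] = {
	[HYBRID_PCORE] = { .cb.get_cost = hybrid_pcore_cost, },
	[HYBRID_ECORE] = { .cb.get_cost = hybrid_ecore_cost, },
};

static bool hybrid_register_perf_domain(unsigned int cpu,
					enum hybrid_cpu_type type)
{
	struct hybrid_em_perf_domain *pd = &perf_domains[type];
	struct device *cpu_dev = get_cpu_device(cpu);

	if (!cpu_dev)
		return false;

	pd->dev = cpu_dev;
	cpumask_set_cpu(cpu, &pd->cpumask);

	/*
	 * One perf state per domain: leaving active_power unset makes
	 * the EM artificial, and only the relative cost returned by
	 * get_cost() matters, so EAS always sees E-cores as cheaper
	 * than P-cores at the same HWP performance level.
	 */
	return !em_dev_register_perf_domain(pd->dev, 1, &pd->cb,
					    &pd->cpumask, false);
}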

The second problem is that the current platforms are much smaller than
what we're expecting to see in the future, so whatever is done today
needs to scale.

Also, I really wouldn't want to have to register special perf domains
for favored cores that share the cost-performance curve with the other
cores of the same type, except that they can turbo up higher if
everyone else is idle, or similar.

> This would also avoid hitting corner cases. For instance, if there is one big
> task and many small tasks, balancing on the number of tasks per CPU is not the
> best idea.
>
> https://lore.kernel.org/all/20240830130309.2141697-4-vincent.guittot@linaro.org/

Understood.
