Open Source and information security mailing list archives
Date:   Thu, 14 Oct 2021 11:55:16 +0200
From:   Ulf Hansson <ulf.hansson@...aro.org>
To:     Hector Martin <marcan@...can.st>
Cc:     Viresh Kumar <viresh.kumar@...aro.org>,
        Sibi Sankar <sibis@...eaurora.org>,
        Saravana Kannan <saravanak@...gle.com>,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>,
        Alyssa Rosenzweig <alyssa@...enzweig.io>,
        Sven Peter <sven@...npeter.dev>, Marc Zyngier <maz@...nel.org>,
        Mark Kettenis <mark.kettenis@...all.nl>,
        Michael Turquette <mturquette@...libre.com>,
        Stephen Boyd <sboyd@...nel.org>,
        Rob Herring <robh+dt@...nel.org>,
        Krzysztof Kozlowski <krzysztof.kozlowski@...onical.com>,
        Viresh Kumar <vireshk@...nel.org>, Nishanth Menon <nm@...com>,
        Catalin Marinas <catalin.marinas@....com>,
        "Rafael J. Wysocki" <rafael@...nel.org>,
        Kevin Hilman <khilman@...nel.org>,
        linux-clk <linux-clk@...r.kernel.org>,
        DTML <devicetree@...r.kernel.org>,
        Linux PM <linux-pm@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 4/9] opp: core: Don't warn if required OPP device does
 not exist

On Tue, 12 Oct 2021 at 07:57, Hector Martin <marcan@...can.st> wrote:
>
> On 12/10/2021 14.51, Viresh Kumar wrote:
> > On 12-10-21, 14:34, Hector Martin wrote:
> >> The table *is* assigned to a genpd (the memory controller), it's just that
> >> that genpd isn't actually a parent of the CPU device. Without the patch you
> >> end up with:
> >>
> >> [    3.040060] cpu cpu4: Failed to set performance rate of cpu4: 0 (-19)
> >> [    3.042881] cpu cpu4: Failed to set required opps: -19
> >> [    3.045508] cpufreq: __target_index: Failed to change cpu frequency: -19
> >
> > Hmm, Saravana and Sibi were working on a similar problem earlier and decided to
> > solve this using devfreq instead. Don't remember the exact series which got
> > merged for this, Sibi ?
> >
> > If this part fails, how do you actually set the performance state of the memory
> > controller's genpd ?
>
> The clock controller has the genpd as an actual power-domain parent, so
> it does it instead. From patch #7:
>
> > +     if (cluster->has_pd)
> > +             dev_pm_genpd_set_performance_state(cluster->dev,
> > +                                                dev_pm_opp_get_required_pstate(opp, 0));
> > +
>
> This is arguably not entirely representative of how the hardware works,
> since technically the cluster switching couldn't care less what the
> memory controller is doing; it's a soft dependency, states that should
> be switched together but are not interdependent (in fact, the clock code
> does this unconditionally after the CPU p-state change, regardless of
> whether we're shifting up or down; this is, FWIW, the same order macOS
> uses, and it clearly doesn't matter which way you do it).

Yes, this sounds like you should move away from modeling the memory
part as a parent genpd for the CPUs' genpd.

As Viresh pointed out, a devfreq driver seems like a better way to do
this. As a matter of fact, there are already devfreq drivers that do
this, unless I am mistaken.

It looks like devfreq providers listen to OPP/cpufreq notifiers, so as
to get an indication of when it could make sense to change a
performance state.
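
As a rough sketch of that notifier pattern (the cpufreq notifier API
is the standard one, but the apple_mc_* names and the pstate-selection
helper are made up for illustration):

```c
/* Hypothetical sketch: a memory-controller driver reacting to cpufreq
 * transitions instead of being a genpd parent of the CPUs. */
#include <linux/cpufreq.h>
#include <linux/notifier.h>

static int apple_mc_cpufreq_notify(struct notifier_block *nb,
				   unsigned long event, void *data)
{
	struct cpufreq_freqs *freqs = data;

	if (event != CPUFREQ_POSTCHANGE)
		return NOTIFY_DONE;

	/* Pick a memory-controller performance state matching the new
	 * CPU frequency; this is the "soft dependency" applied after
	 * the CPU p-state change. apple_mc_set_pstate_for_cpufreq()
	 * is a made-up helper. */
	apple_mc_set_pstate_for_cpufreq(freqs->new);

	return NOTIFY_OK;
}

static struct notifier_block apple_mc_nb = {
	.notifier_call = apple_mc_cpufreq_notify,
};

static int apple_mc_register(void)
{
	return cpufreq_register_notifier(&apple_mc_nb,
					 CPUFREQ_TRANSITION_NOTIFIER);
}
```
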

In some cases the devfreq provider is also modeled as an interconnect
provider, allowing consumers to specify memory bandwidth constraints,
which may trigger a new performance state to be set for the memory
controller.
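
On the consumer side, that could look roughly like the following
(kernel interconnect API; the "cpu-mem" path name and the bandwidth
values are purely illustrative):

```c
/* Hypothetical sketch: a consumer requesting memory bandwidth through
 * the interconnect framework, which may in turn cause the provider to
 * raise the memory controller's performance state. */
#include <linux/interconnect.h>

static int request_mem_bw(struct device *dev)
{
	struct icc_path *path;

	/* "cpu-mem" is a made-up interconnect-names entry */
	path = of_icc_get(dev, "cpu-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* average/peak bandwidth in kBps; values are illustrative */
	return icc_set_bw(path, 1000000, 2000000);
}
```
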

In the Tegra case, the memory controller is modelled as an
interconnect provider and the devfreq node is modelled as an
interconnect consumer of the memory controller. Perhaps this can work
for Apple SoCs too?
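
For illustration, a sketch of what such a devicetree binding might
look like (node names, compatibles, and the MASTER_ID/SLAVE_ID cells
are hypothetical, loosely following the Tegra memory-controller
bindings):

```dts
/* Hypothetical sketch: memory controller as interconnect provider,
 * with a bandwidth-monitoring node as interconnect consumer. */
mc: memory-controller@200000 {
	compatible = "vendor,soc-mc";		/* made-up compatible */
	reg = <0x200000 0x1000>;
	#interconnect-cells = <1>;
};

cpu-bwmon {
	compatible = "vendor,soc-cpu-bwmon";	/* made-up compatible */
	/* consumer path from this device to the memory controller */
	interconnects = <&mc MASTER_ID &mc SLAVE_ID>;
	interconnect-names = "cpu-mem";
};
```
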

That said, as an option to move forward, perhaps we can get the
cpufreq pieces solved first, and then add the performance scaling for
the memory controller as a step on top?

>
> --
> Hector Martin (marcan@...can.st)
> Public Key: https://mrcn.st/pub

Kind regards
Uffe
