Message-ID: <20210526134921.GA414265@e120877-lin.cambridge.arm.com>
Date:   Wed, 26 May 2021 14:49:22 +0100
From:   Vincent Donnefort <vincent.donnefort@....com>
To:     Viresh Kumar <viresh.kumar@...aro.org>
Cc:     peterz@...radead.org, rjw@...ysocki.net,
        vincent.guittot@...aro.org, qperret@...gle.com,
        linux-kernel@...r.kernel.org, ionela.voinescu@....com,
        lukasz.luba@....com, dietmar.eggemann@....com
Subject: Re: [PATCH v2 0/3] EM / PM: Inefficient OPPs

On Wed, May 26, 2021 at 03:08:07PM +0530, Viresh Kumar wrote:
> On 26-05-21, 10:01, Vincent Donnefort wrote:
> > I originally considered adding the inefficiency knowledge to the CPUFreq table.
> 
> I wasn't talking about the cpufreq table here to begin with, but about
> calling dev_pm_opp_disable(), which will eventually be reflected in the
> cpufreq table as well.
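
(For reference, the suggestion as I understand it would look roughly like
the sketch below. Only dev_pm_opp_disable() is the real OPP core API; the
frequency list and the helper name are made up for illustration.)

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/pm_opp.h>

/* Made-up list of frequencies (Hz) known to be inefficient. */
static const unsigned long my_inefficient_freqs[] = {
	300000000, 403200000, 499200000,
};

/* Hypothetical helper: drop inefficient OPPs before any table is built. */
static void my_disable_inefficient_opps(struct device *dev)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(my_inefficient_freqs); i++)
		dev_pm_opp_disable(dev, my_inefficient_freqs[i]);
}
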
> 
> > But I then gave up the idea for two reasons:
> > 
> >   * The EM depends on having schedutil enabled. I don't think that any
> >     other governor would then manage to rely on the inefficient OPPs. (also I
> >     believe Peter had a plan to keep schedutil as the one and only governor)
> 
> Right, that EM is only there for schedutil.
> 
> I would encourage doing this even without the EM dependency, if possible.
> It would be a good thing to do generally, for any driver that wants to do
> that.
> 
> >   * A CPUfreq driver doesn't have to rely on the CPUfreq table; if the
> >     knowledge about inefficient OPPs lives in the latter, some drivers might
> >     not be able to use the feature (you might say 'their loss' though :))
> > 
> > For those reasons, I thought that adding inefficiency support to the
> > CPUfreq table would complicate the patchset a lot for no functional gain.
> 
> What about disabling the OPP in the OPP core itself? That way every user
> will get the same picture.
> 
> > > 
> > > Since the whole thing depends on EM and OPPs, I think we can actually do this.
> > > 
> > > When the cpufreq driver registers with the EM core, let's find all the
> > > inefficient OPPs and disable them once and for all. Of course, this must
> > > be done on a voluntary basis; a flag from the drivers will do. With this,
> > > we won't be required to update anything at any of the governors' end.
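
(A rough sketch of what that could look like, purely illustrative: the
helper and the opt-in it implies are invented here; only the
em_perf_domain / em_perf_state fields and dev_pm_opp_disable() are
existing interfaces. An OPP counts as inefficient when a higher frequency
has a lower or equal cost.)

#include <linux/energy_model.h>
#include <linux/kernel.h>
#include <linux/pm_opp.h>

/* Hypothetical helper, called at EM registration when the driver opts in. */
static void em_disable_inefficient_opps(struct device *dev,
					struct em_perf_domain *pd)
{
	unsigned long min_cost = ULONG_MAX;
	int i;

	/* The EM table is sorted by ascending frequency: walk it backwards. */
	for (i = pd->nr_perf_states - 1; i >= 0; i--) {
		struct em_perf_state *ps = &pd->table[i];

		if (ps->cost < min_cost)
			min_cost = ps->cost;
		else
			/* A higher OPP is at least as cheap: inefficient. */
			dev_pm_opp_disable(dev, ps->frequency * 1000UL);
	}
}
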
> > 
> > We still need to keep the inefficient OPPs for thermal reasons.
> 
> How will that benefit us if that OPP is never going to run anyway? We won't
> be cooling down the CPU then, will we?

It would give the cooling framework more freedom to pick a lower frequency
to mitigate the current temperature, even if we know this isn't energy
efficient.

As an example, on the Pixel4's SD855, the first 6 OPPs are inefficient on one
of the clusters. If we hide those from the cooling framework, we'll prevent
cooling for quite a wide range of frequencies.

That would, however, be much more intrusive to support in cpufreq than just
preventing the OPPs from being registered.

> 
> > But if we go with
> > the inefficiency support in the CPUfreq table, we could enable or disable
> > them depending on the thermal pressure. Or add a flag to read the table
> > with or without inefficient OPPs?
> 
> Yeah, I was looking for a cpufreq driver flag or something like that so OPPs
> don't disappear magically for some platforms which don't want it to happen.
> 
> Moreover, a cpufreq driver first creates the OPP table, then registers with EM
> or thermal. If we can play with that sequence a bit and make sure inefficient
> OPPs are disabled before thermal or cpufreq tables are created, we will be good.
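
(For illustration, that ordering in a driver's probe/init path could look
roughly like the fragment below. The calls shown are existing APIs; the
surrounding variables (cpu_dev, nr_opp, em_cb, cpus, freq_table) are
placeholders, and the point where the EM core would disable inefficient
OPPs, behind an opt-in flag, is hypothetical and only marked by a comment.)

	/* 1. Create the OPP table first. */
	ret = dev_pm_opp_of_add_table(cpu_dev);
	if (ret)
		return ret;

	/*
	 * 2. Register with the EM core. With a (hypothetical) opt-in flag,
	 *    the EM core could call dev_pm_opp_disable() on the inefficient
	 *    OPPs at this point.
	 */
	ret = em_dev_register_perf_domain(cpu_dev, nr_opp, &em_cb, cpus, true);
	if (ret)
		goto out_free_opp;

	/* 3. Build the cpufreq table last, so it never sees disabled OPPs. */
	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
	if (ret)
		goto out_free_opp;
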
> 
> -- 
> viresh
