Message-ID: <20181012152019.GF2371@codeaurora.org>
Date:   Fri, 12 Oct 2018 09:20:19 -0600
From:   Lina Iyer <ilina@...eaurora.org>
To:     "Rafael J. Wysocki" <rafael@...nel.org>
Cc:     "Rafael J. Wysocki" <rjw@...ysocki.net>, rplsssn@...eaurora.org,
        Andy Gross <andy.gross@...aro.org>, david.brown@...aro.org,
        Ulf Hansson <ulf.hansson@...aro.org>,
        Kevin Hilman <khilman@...nel.org>,
        linux-arm-msm <linux-arm-msm@...r.kernel.org>,
        linux-soc@...r.kernel.org,
        "Nayak, Rajendra" <rnayak@...eaurora.org>,
        bjorn.andersson@...aro.org,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux PM <linux-pm@...r.kernel.org>,
        "devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
        Stephen Boyd <sboyd@...nel.org>, evgreen@...omium.org,
        Doug Anderson <dianders@...omium.org>,
        Matthias Kaehlcke <mka@...omium.org>
Subject: Re: [PATCH RFC v1 2/8] kernel/cpu_pm: Manage runtime PM in the idle
 path for CPUs

On Fri, Oct 12 2018 at 01:43 -0600, Rafael J. Wysocki wrote:
>On Fri, Oct 12, 2018 at 12:08 AM Lina Iyer <ilina@...eaurora.org> wrote:
>> On Thu, Oct 11 2018 at 14:56 -0600, Rafael J. Wysocki wrote:
>> >On Wednesday, October 10, 2018 11:20:49 PM CEST Raju P.L.S.S.S.N wrote:
>> >> From: Ulf Hansson <ulf.hansson@...aro.org>

>> The cluster states should account for that additional latency.
>
>But even then, you need to be sure that the idle governor selected
>"cluster" states for all of the CPUs in the cluster.  It might select
>WFI for one of them for reasons unrelated to the distance to the next
>timer (so to speak), for example.
>
Well, if cpuidle chooses WFI, cpu_pm_enter() will not be called, so for
that case we are okay with this approach.
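
To make that concrete, the idle enter path roughly has this shape
(hand-written sketch, not the actual driver code; example_firmware_suspend()
is just a placeholder for the PSCI/firmware call):

#include <linux/cpuidle.h>
#include <linux/cpu_pm.h>

int example_firmware_suspend(int idx);	/* placeholder: firmware call, defined elsewhere */

/*
 * Sketch only: the shallow WFI state skips the cpu_pm notifiers entirely,
 * so runtime PM managed from cpu_pm_enter()/cpu_pm_exit() never runs there.
 */
static int example_enter_idle_state(struct cpuidle_device *dev,
				    struct cpuidle_driver *drv, int idx)
{
	int ret;

	if (idx == 0) {
		cpu_do_idle();			/* plain WFI, no CPU context lost */
		return idx;
	}

	ret = cpu_pm_enter();			/* CPU_PM_ENTER notifiers; runtime PM hooks in here */
	if (ret)
		return -1;

	ret = example_firmware_suspend(idx);	/* enter the deeper state via firmware */

	cpu_pm_exit();				/* CPU_PM_EXIT notifiers */

	return ret ? -1 : idx;
}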

>> Just the CPU's power down states need not care about that.
>
>The meaning of this sentence isn't particularly clear to me. :-)
>
What I meant to say is that if cpuidle chooses a CPU-only power-down
state, then, at least on the ARM architecture, we would not choose to
power down the cluster in the firmware. To power down the cluster in the
firmware, all CPUs need to choose a cluster state, which would account
for the additional latency of powering off and on the domain.

How I ever thought that I could convey this point in that line is beyond
me now. Sorry!
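
Conceptually, the last-man coordination amounts to something like this
(just a sketch with made-up names; the real logic lives in firmware/PSCI
and also has to deal with the wakeup path undoing the counts):

#include <linux/spinlock.h>
#include <linux/types.h>

/*
 * Sketch only: the cluster is powered down by the last CPU going idle, and
 * only if every CPU in the cluster picked a cluster state, whose entry/exit
 * latency already includes the domain power off/on cost.
 */
struct example_cluster {
	spinlock_t lock;
	unsigned int num_cpus;		/* CPUs in the cluster */
	unsigned int cpus_down;		/* CPUs that have entered idle */
	unsigned int cluster_votes;	/* of those, how many chose a cluster state */
};

static bool example_cluster_may_power_down(struct example_cluster *cl,
					   bool chose_cluster_state)
{
	bool power_down;

	spin_lock(&cl->lock);
	cl->cpus_down++;
	if (chose_cluster_state)
		cl->cluster_votes++;

	/* Last CPU down, and every CPU asked for a cluster state */
	power_down = cl->cpus_down == cl->num_cpus &&
		     cl->cluster_votes == cl->num_cpus;
	spin_unlock(&cl->lock);

	return power_down;
}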

>> But, it would be nice if the PM domain governor could be cognizant of
>> the idle state chosen for each CPU, that way we dont configure the
>> domain to be powered off when the CPUs have just chosen to power down
>> (not chosen a cluster state). I think that is a whole different topic to
>> discuss.
>
>This needs to be sorted out before the approach becomes viable, though.
>
We embarked on that discussion a few years ago, but realized that there
is a lot more complexity involved in specifying that, especially with DT.
I believe ACPI has a way to specify this, but DT and driver code
currently don't have a nice way to propagate this requirement to the
domain governor. So we shelved it for the future.

>Basically, the domain governor needs to track what the idle governor
>did for all of the CPUs in the domain and only let the domain go off
>if the latency matches all of the states selected by the idle
>governor.  Otherwise the idle governor's assumptions would be violated
>and it would become essentially useless overhead.
>
Well, we kind of do that in the CPU PM domain governor. By looking at the
next wakeup and the latency/QoS requirement of each CPU in the domain,
we determine if the domain can be powered off. But if we were to do
this by correlating domain idle states with the required CPU idle
states, then a lot would need to be plumbed into cpuidle and the driver
model. The current approach is rather simple while meeting most of the
requirements.
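
Roughly, the check has this shape (made-up helper names, not the actual
governor code):

#include <linux/cpumask.h>
#include <linux/ktime.h>

ktime_t example_next_cpu_wakeup(int cpu);	/* made up: next expected wakeup of @cpu */
s64 example_cpu_qos_limit_ns(int cpu);		/* made up: per-CPU wakeup latency (QoS) limit */

struct example_cpu_domain {
	const struct cpumask *cpus;		/* CPUs in this PM domain */
	s64 power_off_latency_ns;
	s64 power_on_latency_ns;
};

/*
 * Sketch only: allow domain power off if every CPU's next expected wakeup is
 * far enough out to amortize the off/on latency, and no per-CPU wakeup
 * latency constraint is tighter than the power-on latency.
 */
static bool example_cpu_domain_may_power_off(struct example_cpu_domain *pd,
					     ktime_t now)
{
	s64 min_idle_ns = pd->power_off_latency_ns + pd->power_on_latency_ns;
	int cpu;

	for_each_cpu(cpu, pd->cpus) {
		s64 idle_ns = ktime_to_ns(ktime_sub(example_next_cpu_wakeup(cpu), now));

		if (idle_ns < min_idle_ns)
			return false;

		if (example_cpu_qos_limit_ns(cpu) < pd->power_on_latency_ns)
			return false;
	}

	return true;
}

Nothing in there knows which idle state each CPU actually picked -- that
is the part that would need the extra plumbing.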

Thanks,
Lina
