Message-ID: <8c5b46d1-8b34-2ced-e27d-4d76f80953c6@arm.com>
Date:   Mon, 22 May 2017 16:02:23 +0100
From:   Sudeep Holla <sudeep.holla@....com>
To:     Daniel Lezcano <daniel.lezcano@...aro.org>
Cc:     Sudeep Holla <sudeep.holla@....com>, rjw@...ysocki.net,
        lorenzo.pieralisi@....com, leo.yan@...aro.org,
        "open list:CPUIDLE DRIVERS" <linux-pm@...r.kernel.org>,
        open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] ARM: cpuidle: Support asymmetric idle definition



On 22/05/17 15:48, Daniel Lezcano wrote:
> On 22/05/2017 15:02, Sudeep Holla wrote:
> 
> [ ... ]
> 
>>>>>>> +		drv->cpumask = &cpu_topology[cpu].core_sibling;
>>>>>>> +
>>>>>>
>>>>>> This is not always true and not architecturally guaranteed. So instead
>>>>>> of introducing this broken dependency, it's better to extract the
>>>>>> information from the device tree.
>>>>>
>>>>> Can you give an example of a broken dependency?
>>>>>
>>>>> The CPU topology information is extracted from the device tree, so
>>>>> if the topology is broken, the DT is broken too. Otherwise, the
>>>>> topology code must fix the broken dependency from the DT.
>>>>>
>>>>
>>>> No, I meant there's no guarantee that all designs must follow this rule.
>>>> I don't mean the CPU topology code or binding is broken. What I meant is
>>>> that linking CPU topology to CPU power domains is wrong. We should make
>>>> use of the DT to infer this information, as it's already there. The
>>>> topology bindings make no reference to power, and hence you simply can't
>>>> infer that information from them.
>>>
>>> Ok, I will have a look at how power domains can fit into this.
>>>
>>> However, I'm curious to know of a platform with a cluster idle state
>>> that powers down only a subset of the CPUs belonging to the cluster.
>>>
>>
>> We can't reuse CPU topology for power domains:
>> 1. As I mentioned earlier, it certainly won't be the same with ARM DynamIQ.
>> 2. The topology bindings strictly restrict themselves to topology and are
>> not connected with power domains. We also have separate power-domain
>> bindings.
> 
> Yes, the theory is valid, but in practice today I don't see a platform
> where a cluster defined by the topology has a different cluster power domain.
> 

While I agree that's true in current practice, in the past we have seen
"innovative designs". We initially had 2 clusters (big and little), then
we saw 3 clusters (big, little and tiny, or whatever you want to call
them). Since it's not architecturally guaranteed, it's not nice to make
this assumption in a generic driver.
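
To make the alternative concrete, here is a minimal sketch (purely
illustrative, not from the posted patch; the function name and the
matching logic are mine) of how a driver could build its cpumask from
the CPUs' "power-domains" phandles in the DT instead of core_sibling:

/*
 * Hypothetical sketch: group CPUs by the power domain referenced from
 * their DT nodes rather than by cpu_topology[].core_sibling.
 */
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/of.h>

static int arm_idle_mask_from_power_domain(int ref_cpu, struct cpumask *mask)
{
	struct of_phandle_args ref_pd, pd;
	struct device_node *np;
	int cpu, ret;

	np = of_get_cpu_node(ref_cpu, NULL);
	if (!np)
		return -ENODEV;
	ret = of_parse_phandle_with_args(np, "power-domains",
					 "#power-domain-cells", 0, &ref_pd);
	of_node_put(np);
	if (ret)
		return ret;

	cpumask_clear(mask);
	for_each_possible_cpu(cpu) {
		np = of_get_cpu_node(cpu, NULL);
		if (!np)
			continue;
		if (!of_parse_phandle_with_args(np, "power-domains",
						"#power-domain-cells", 0, &pd)) {
			/* same provider and same first specifier cell, if any */
			if (pd.np == ref_pd.np &&
			    (!pd.args_count || pd.args[0] == ref_pd.args[0]))
				cpumask_set_cpu(cpu, mask);
			of_node_put(pd.np);
		}
		of_node_put(np);
	}
	of_node_put(ref_pd.np);
	return 0;
}

The point is only that the grouping would come from the power-domain
description in the DT, not from the topology; how the resulting masks
map to per-domain cpuidle drivers is a separate discussion.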

> By the way, do you have any pointer to documentation for DynamIQ PM and
> its design? I would be interested to have a look.
> 

I don't have anything in detail. The excerpt below, from the link I sent
earlier, indicates that it's possible and highly likely.

"DynamIQ supports multiple, configurable, performance domains within a
single cluster. These domains, consisting of single or multiple ARM
CPUs, can scale in performance and power with finer granularity than
previous quad-core clusters."

>> We need to separate topology and power domains. We have some dependencies
>> like this in the big.LITTLE drivers (both CPUfreq and CPUIdle), but those
>> dependencies must be removed as they are not architecturally guaranteed.
>> Lorenzo had a patch[1] to solve this issue; I can post the latest
>> version of it again and continue the discussion after some basic
>> rebase/testing.
> 
> Actually, I am not convinced by the approach proposed in this patch.
> 
> Let me have a look at the idle power domains first; I do believe we can
> do something much simpler.
> 

OK, if you think so.

-- 
Regards,
Sudeep
