Date: Thu, 29 Feb 2024 11:45:41 +0000
From: Lukasz Luba <lukasz.luba@....com>
To: Cristian Marussi <cristian.marussi@....com>
Cc: Sibi Sankar <quic_sibis@...cinc.com>, sudeep.holla@....com,
 linux-arm-kernel@...ts.infradead.org, pierre.gondois@....com,
 dietmar.eggemann@....com, morten.rasmussen@....com, viresh.kumar@...aro.org,
 rafael@...nel.org, linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org,
 quic_mdtipton@...cinc.com, linux-arm-msm@...r.kernel.org
Subject: Re: [PATCH V3 2/2] cpufreq: scmi: Register for limit change
 notifications



On 2/29/24 11:28, Cristian Marussi wrote:
> On Thu, Feb 29, 2024 at 10:22:39AM +0000, Lukasz Luba wrote:
>>
>>
>> On 2/29/24 09:59, Lukasz Luba wrote:
>>>
>>>
>>> On 2/28/24 17:00, Sibi Sankar wrote:
>>>>
>>>>
>>>> On 2/28/24 18:54, Lukasz Luba wrote:
>>>>>
>>>>>
>>>>> On 2/27/24 18:16, Sibi Sankar wrote:
>>>>>> Register for limit change notifications if supported and use the
>>>>>> throttled frequency from the notification to apply HW pressure.
>>>>
>>>> Lukasz,
>>>>
>>>> Thanks for taking the time to review the series!
>>>>
>>>>>>
>>>>>> Signed-off-by: Sibi Sankar <quic_sibis@...cinc.com>
>>>>>> ---
>>>>>>
>>>>>> v3:
>>>>>> * Sanitize range_max received from the notifier. [Pierre]
>>>>>> * Update commit message.
>>>>>>
>>>>>>   drivers/cpufreq/scmi-cpufreq.c | 29 ++++++++++++++++++++++++++++-
>>>>>>   1 file changed, 28 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/cpufreq/scmi-cpufreq.c
>>>>>> b/drivers/cpufreq/scmi-cpufreq.c
>>>>>> index 76a0ddbd9d24..78b87b72962d 100644
>>>>>> --- a/drivers/cpufreq/scmi-cpufreq.c
>>>>>> +++ b/drivers/cpufreq/scmi-cpufreq.c
>>>>>> @@ -25,9 +25,13 @@ struct scmi_data {
>>>>>>  	int domain_id;
>>>>>>  	int nr_opp;
>>>>>>  	struct device *cpu_dev;
>>>>>> +	struct cpufreq_policy *policy;
>>>>>>  	cpumask_var_t opp_shared_cpus;
>>>>>> +	struct notifier_block limit_notify_nb;
>>>>>>  };
>>>>>> +const struct scmi_handle *handle;
>>>
>>> I've missed this bit here.
>>
>> So for this change we actually have to ask Cristian or Sudeep
>> because I'm not sure if we have only one 'handle' instance
>> for all cpufreq devices.
>>
>> If we have different 'handle' we cannot move it to the
>> global single pointer.
>>
>> Sudeep, Cristian what do you think?
> 
> I was just replying to point this out :D .... since SCMI drivers can be
> probed multiple times IF you define multiple scmi top nodes in your DT
> containing the same protocol nodes, they receive a distinct sdev/handle/ph
> for each probe...so any attempt to globalize these won't work...BUT...
> 
> ...this is a bit of a weird setup BUT it is not against the spec and it can
> be used to further parallelize the SCMI accesses to disjoint sets of resources
> within the same protocol (a long story here...) AND this type of setup is
> something that is already used by some other colleagues of Sibi working
> on a different line of products (AFAIK)...
> 
> So, for these reasons, usually, all the other SCMI drivers have per-instance
> non-global references to handle/sdev/ph....
> 
> ...having said that, though, looking at the structure of CPUFreq
> drivers, I am not sure that they can stand such a setup
> where multiple instances of this same driver are probed...
> 
> ...indeed the existing *ph ref above is already global....so it won't
> work anyway in case of multiple instances now...
> 
> ...and if I look at the signature CPUFreq expects for scmi_cpufreq_get_rate()
> and how it is implemented now using the global *ph reference, it is
> clearly already not working cleanly on a multi-instance setup...
> 
> ...now...I can imagine how to (maybe) fix the above by removing the globals,
> BUT the question, more generally, is: is CPUFreq supposed to work at all in
> this multi-probed mode of operation?
> Does it even make sense to support this in CPUFreq?
> 
> (as an example, in cpufreq.c there is a static global cpufreq_driver
>   pointing to the arch-specific configured driver, BUT that also holds
>   some .driver_data AND that clearly won't be instance specific if you
>   probe multiple times and register with CPUFreq multiple times...)
> 
>   More questions than answers here :D
> 

Thanks Cristian for the instant response. Yes, indeed now we have more
questions :) (which is good). But that's a good description of the
situation.

So let's consider a few options for what we could do now:
1. Let Sibi keep the 'handle' as another piece of global state, but
    add a BUG_ON() or WARN_ON() in the probe path if the next
    'handle' instance is different from the one already set in the
    global (see the sketch below). This would simply mean that we
    don't support such a configuration in a platform (yet). As you
    said, we already have the *ph global, so maybe platforms with
    multiple instances of this particular cpufreq/performance
    protocol don't exist yet.
2. Ask Sibi to wait with this change until we refactor the
    existing driver so that it can easily support those
    multiple instances, and then pick up this patch set.
    Although, we would also like to have those notifications from our
    Juno SCP reference FW, so the feature is useful.
3. Ask Sibi to refactor his patch to get the 'handle' in a
    different way, using existing code and not introducing this global.
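
For 1., the guard in the probe path could look something like this
(just a sketch, the exact check and error code are up to Sibi; 'handle'
here is the global added by the patch):

	/* early in scmi_cpufreq_probe(struct scmi_device *sdev) */
	if (WARN_ON(handle && handle != sdev->handle))
		return -EBUSY;	/* second instance w/ a different handle */

	handle = sdev->handle;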

IMHO we could do this in steps: 1. and then 2. Once
we create some mock platform to test this refactoring, we can
start cleaning it up.
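
(Just to illustrate the direction of that refactoring -- the field and
callback names below are only an example, not the final code: the handle
could live in the per-policy scmi_data, and the notifier callback could
recover it with container_of() instead of touching any global:)

	struct scmi_data {
		int domain_id;
		int nr_opp;
		struct device *cpu_dev;
		struct cpufreq_policy *policy;
		cpumask_var_t opp_shared_cpus;
		struct notifier_block limit_notify_nb;
		const struct scmi_handle *handle;	/* per-instance */
	};

	static int scmi_limit_notify_cb(struct notifier_block *nb,
					unsigned long event, void *data)
	{
		struct scmi_data *priv = container_of(nb, struct scmi_data,
						      limit_notify_nb);

		/* use priv->handle / priv->policy here, no globals needed */
		return NOTIFY_OK;
	}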

What do you think?

Regards,
Lukasz
