Message-ID: <0e93db03-5b1f-4c18-b8da-03cdf82492be@kernel.org>
Date: Wed, 12 Feb 2025 16:03:37 -0600
From: Mario Limonciello <superm1@...nel.org>
To: Dhananjay Ugwekar <Dhananjay.Ugwekar@....com>,
 "Gautham R . Shenoy" <gautham.shenoy@....com>,
 Perry Yuan <perry.yuan@....com>
Cc: "open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)"
 <linux-kernel@...r.kernel.org>,
 "open list:CPU FREQUENCY SCALING FRAMEWORK" <linux-pm@...r.kernel.org>,
 Mario Limonciello <mario.limonciello@....com>
Subject: Re: [PATCH 03/14] cpufreq/amd-pstate: Move perf values into a union

On 2/12/2025 00:31, Dhananjay Ugwekar wrote:
> On 2/12/2025 3:44 AM, Mario Limonciello wrote:
>> On 2/10/2025 07:38, Dhananjay Ugwekar wrote:
>>> On 2/7/2025 3:26 AM, Mario Limonciello wrote:
>>>> From: Mario Limonciello <mario.limonciello@....com>
>>>>
>>>> By storing perf values in a union all the writes and reads can
>>>> be done atomically, removing the need for some concurrency protections.
>>>>
>>>> While making this change, also drop the cached frequency values,
>>>> using inline helpers to calculate them on demand from perf value.
>>>>
>>>> Signed-off-by: Mario Limonciello <mario.limonciello@....com>
>>>> ---
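
[For context, a minimal sketch of the idea the commit message describes: if all the perf values are packed into a union that fits in a single u64, then READ_ONCE()/WRITE_ONCE() on the union compile to one word-sized load/store, so readers and writers never observe a torn mix of old and new values. The field names below are assumptions for illustration, not taken from the patch:

    union perf_cached {
        struct {
            u8 highest_perf;     /* highest perf level the core supports */
            u8 nominal_perf;     /* perf level at the nominal frequency */
            u8 lowest_perf;      /* lowest supported perf level */
            u8 min_limit_perf;   /* current policy minimum */
            u8 max_limit_perf;   /* current policy maximum */
        };
        u64 val;                 /* the whole set as one machine word */
    };

A reader takes one snapshot (perf = READ_ONCE(cpudata->perf);) and works on the local copy; a writer fills in a local copy and publishes it with a single WRITE_ONCE(cpudata->perf, perf);.]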
> [Snip]
>>>>  static int amd_pstate_update_freq(struct cpufreq_policy *policy,
>>>>                                    unsigned int target_freq, bool fast_switch)
>>>>  {
>>>>      struct cpufreq_freqs freqs;
>>>> -    struct amd_cpudata *cpudata = policy->driver_data;
>>>> +    struct amd_cpudata *cpudata;
>>>> +    union perf_cached perf;
>>>>      u8 des_perf;
>>>>
>>>>      amd_pstate_update_min_max_limit(policy);
>>>>
>>>> +    cpudata = policy->driver_data;
>>>
>>> Any specific reason why we moved this dereference to after amd_pstate_update_min_max_limit()?
>>
>> Closer to the first use.
>>
>>>
>>>> +    perf = READ_ONCE(cpudata->perf);
>>>> +
>>>>      freqs.old = policy->cur;
>>>>      freqs.new = target_freq;
>>>>
>>>> -    des_perf = freq_to_perf(cpudata, target_freq);
>>>> +    des_perf = freq_to_perf(perf, cpudata->nominal_freq, target_freq);
>>>
>>> Personally, I preferred the earlier two-argument format for the helper functions, as the helper
>>> function handled the common dereferencing (i.e. cpudata->perf and cpudata->nominal_freq).
>>
>> Something like this?
>>
>> static inline u8 freq_to_perf(struct amd_cpudata *cpudata, unsigned int freq_val)
>> {
>>      union perf_cached perf = READ_ONCE(cpudata->perf);
>>      u8 perf_val = DIV_ROUND_UP_ULL((u64)freq_val * perf.nominal_perf, cpudata->nominal_freq);
>>
>>      return clamp_t(u8, perf_val, perf.lowest_perf, perf.highest_perf);
>> }
>>
>> As an example of what that expands to in practice when used inline:
>>
>> static void amd_pstate_update_min_max_limit(struct cpufreq_policy *policy)
>> {
>>      struct amd_cpudata *cpudata = policy->driver_data;
>>      union perf_cached perf = READ_ONCE(cpudata->perf);
>>      union perf_cached perf2 = READ_ONCE(cpudata->perf);
>>      union perf_cached perf3 = READ_ONCE(cpudata->perf);
>>      u8 val1 = DIV_ROUND_UP_ULL((u64)policy->max * perf2.nominal_perf, cpudata->nominal_freq);
>>      u8 val2 = DIV_ROUND_UP_ULL((u64)policy->min * perf2.nominal_perf, cpudata->nominal_freq);
>>
>>      perf.max_limit_perf = clamp_t(u8, val1, perf2.lowest_perf, perf2.highest_perf);
>>      perf.min_limit_perf = clamp_t(u8, val2, perf3.lowest_perf, perf3.highest_perf);
>> .
>> .
>> .
>>
>> So now that's 3 reads for cpudata->perf in every use.
> 
> Yeah, right, it's a tradeoff: cleaner-looking code vs. fewer computations.
> I'll leave it up to you, I'm okay either way.
> 

OK - I think I'll leave it as it is now for the next spin, and let 
Gautham be the tiebreaker when he reviews it, in case he doesn't like it.
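
[For reference, a sketch of the three-argument form the patch keeps: the caller does a single READ_ONCE() and hands the snapshot to every helper, so cpudata->perf is loaded exactly once per function regardless of how many conversions follow. The call site matches the quoted hunk; anything beyond the quoted diff is assumed:

    /* The caller passes in the snapshot; no READ_ONCE() inside the helper. */
    static inline u8 freq_to_perf(union perf_cached perf, u32 nominal_freq,
                                  unsigned int freq_val)
    {
        u8 perf_val = DIV_ROUND_UP_ULL((u64)freq_val * perf.nominal_perf,
                                       nominal_freq);

        return clamp_t(u8, perf_val, perf.lowest_perf, perf.highest_perf);
    }

    /* Call site, as in the hunk above: */
    perf = READ_ONCE(cpudata->perf);
    des_perf = freq_to_perf(perf, cpudata->nominal_freq, target_freq);]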
