Message-ID: <fc39e67bdf397f0aa51f00c58d135ac0@linux.dev>
Date:   Sat, 25 Mar 2023 02:37:16 +0000
From:   "Yajun Deng" <yajun.deng@...ux.dev>
To:     "Lukasz Luba" <lukasz.luba@....com>
Cc:     linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org,
        rostedt@...dmis.org, dietmar.eggemann@....com,
        vincent.guittot@...aro.org, mingo@...hat.com, vschneid@...hat.com,
        bristot@...hat.com, bsegall@...gle.com, juri.lelli@...hat.com,
        peterz@...radead.org, mgorman@...e.de, viresh.kumar@...aro.org,
        rafael@...nel.org
Subject: Re: [PATCH] cpufreq: schedutil: Combine two loops into one in
 sugov_start()

March 24, 2023 6:46 PM, "Lukasz Luba" <lukasz.luba@....com> wrote:

> Hi Yajun,
> 
> On 3/24/23 10:00, Yajun Deng wrote:
> 
>> The sugov_start() function currently contains two for loops that
>> traverse the CPU list and perform some initialization tasks. The first
>> loop initializes each sugov_cpu struct and assigns the CPU number and
>> sugov_policy pointer. The second loop sets up the update_util hook for
>> each CPU based on the policy type.
>> 
>> Since both loops operate on the same CPU list, it is possible to combine
>> them into a single for loop. This simplifies the code and reduces the
>> number of times the CPU list needs to be traversed, which can improve
>> performance.
>> 
>> Signed-off-by: Yajun Deng <yajun.deng@...ux.dev>
>> ---
>>  kernel/sched/cpufreq_schedutil.c | 12 ++++--------
>>  1 file changed, 4 insertions(+), 8 deletions(-)
>> 
>> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
>> index e3211455b203..9a28ebbb9c1e 100644
>> --- a/kernel/sched/cpufreq_schedutil.c
>> +++ b/kernel/sched/cpufreq_schedutil.c
>> @@ -766,14 +766,6 @@ static int sugov_start(struct cpufreq_policy *policy)
>> 
>>          sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS);
>> 
>> -        for_each_cpu(cpu, policy->cpus) {
>> -                struct sugov_cpu *sg_cpu = &per_cpu(sugov_cpu, cpu);
>> -
>> -                memset(sg_cpu, 0, sizeof(*sg_cpu));
>> -                sg_cpu->cpu = cpu;
>> -                sg_cpu->sg_policy = sg_policy;
>> -        }
>> -
>>          if (policy_is_shared(policy))
>>                  uu = sugov_update_shared;
>>          else if (policy->fast_switch_enabled && cpufreq_driver_has_adjust_perf())
>> @@ -784,6 +776,10 @@ static int sugov_start(struct cpufreq_policy *policy)
>>          for_each_cpu(cpu, policy->cpus) {
>>                  struct sugov_cpu *sg_cpu = &per_cpu(sugov_cpu, cpu);
>> 
>> +                memset(sg_cpu, 0, sizeof(*sg_cpu));
>> +                sg_cpu->cpu = cpu;
>> +                sg_cpu->sg_policy = sg_policy;
>> +
>>                  cpufreq_add_update_util_hook(cpu, &sg_cpu->update_util, uu);
>>          }
>>          return 0;
> 
> IMO the change might cause a race.
> The second loop is the one that registers the scheduler hook. If the
> two loops are combined, that hook can already be used from another
> CPU while the remaining CPUs have not been initialized yet.
> The first loop makes sure that every CPU in 'policy->cpus' gets a
> clean 'sg_cpu' context with proper 'cpu' and 'sg_policy' values
> first. With the loops combined, the scheduler on another CPU may
> enter the sugov code path before that initialization is complete.
> 
> If the policy is shared by many CPUs and any of them is allowed to
> change the frequency, then some CPU can enter this code flow, which
> remotely checks the other CPUs' utilization:
> 
>   sugov_next_freq_shared()
>     for each cpu in policy->cpus:
>       sugov_get_util()
>         where 'sg_cpu->cpu' is used
> 
> Therefore, IMO this optimization in a start function (which is
> called only once and is not part of the hot path) is not worth
> the race risk.
>

OK, got it. Thanks!
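
For the archive, here is a tiny userspace sketch of the window described
above. It is my own toy model, not the kernel code path: the struct, the
thread names and the sleeps are all made up for illustration. It only shows
that once the "hook" for the first CPU is visible, a remote walker can scan
per-CPU entries that the combined loop has not reached yet.

/* toy_race.c - illustrative only, not kernel code.
 * Models a combined loop that initializes a per-CPU entry and "registers
 * the hook" in the same iteration, so a remote reader that starts as soon
 * as the first hook exists can still see uninitialized entries.
 * Build: cc -pthread toy_race.c -o toy_race
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define NCPUS 4

struct toy_sg_cpu {
        atomic_int hooked;      /* stands in for the update_util hook */
        int cpu;                /* stands in for sg_cpu->cpu */
};

static struct toy_sg_cpu cpus[NCPUS];   /* zero-initialized, like after memset */

/* Combined-loop "start": init one CPU, hook it, move on to the next. */
static void *starter(void *arg)
{
        (void)arg;
        for (int i = 0; i < NCPUS; i++) {
                cpus[i].cpu = i + 100;                /* the "proper" value */
                atomic_store(&cpus[i].hooked, 1);     /* hook registered */
                usleep(10000);                        /* widen the window */
        }
        return NULL;
}

/* Remote reader: once any hook exists, scan all CPUs, like
 * sugov_next_freq_shared() walking policy->cpus. */
static void *reader(void *arg)
{
        (void)arg;
        while (!atomic_load(&cpus[0].hooked))
                ;       /* wait for the first hook to show up */
        for (int i = 0; i < NCPUS; i++)
                /* the racy read of .cpu below is exactly the point */
                printf("cpu %d: value %d%s\n", i, cpus[i].cpu,
                       atomic_load(&cpus[i].hooked) ? "" : "  <- not initialized yet");
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, starter, NULL);
        pthread_create(&b, NULL, reader, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}

In the kernel case the "reader" is the scheduler firing the freshly
registered update_util hook on another CPU of a shared policy, and the
not-yet-initialized fields are 'sg_cpu->cpu' and 'sg_cpu->sg_policy',
which is why the initialization loop has to complete before the hooks
are registered.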
 
> Regards
> Lukasz
