Date:   Tue, 20 Jun 2023 19:36:36 +0200
From:   "Rafael J. Wysocki" <rafael@...nel.org>
To:     Lukasz Luba <lukasz.luba@....com>
Cc:     linux-kernel@...r.kernel.org, linux-trace-kernel@...r.kernel.org,
        rafael@...nel.org, linux-pm@...r.kernel.org, rostedt@...dmis.org,
        mhiramat@...nel.org, mingo@...hat.com, peterz@...radead.org,
        juri.lelli@...hat.com, vincent.guittot@...aro.org,
        dietmar.eggemann@....com, bsegall@...gle.com, mgorman@...e.de,
        bristot@...hat.com, vschneid@...hat.com, delyank@...com,
        qyousef@...gle.com, qyousef@...alina.io
Subject: Re: [RESEND][PATCH v2 2/3] cpufreq: schedutil: Refactor
 sugov_update_shared() internals

On Mon, May 22, 2023 at 4:57 PM Lukasz Luba <lukasz.luba@....com> wrote:
>
> Remove the nested if block. Use a simple check to bail out and
> jump to the unlock label at the end. This makes the code more
> readable and prepares it for future tracing.
>
> Signed-off-by: Lukasz Luba <lukasz.luba@....com>
> ---
>  kernel/sched/cpufreq_schedutil.c | 20 +++++++++++---------
>  1 file changed, 11 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index e3211455b203..f462496e5c07 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -446,17 +446,19 @@ sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
>
>         ignore_dl_rate_limit(sg_cpu);
>
> -       if (sugov_should_update_freq(sg_policy, time)) {
> -               next_f = sugov_next_freq_shared(sg_cpu, time);
> +       if (!sugov_should_update_freq(sg_policy, time))
> +               goto unlock;
>
> -               if (!sugov_update_next_freq(sg_policy, time, next_f))
> -                       goto unlock;
> +       next_f = sugov_next_freq_shared(sg_cpu, time);
> +
> +       if (!sugov_update_next_freq(sg_policy, time, next_f))
> +               goto unlock;
> +
> +       if (sg_policy->policy->fast_switch_enabled)
> +               cpufreq_driver_fast_switch(sg_policy->policy, next_f);
> +       else
> +               sugov_deferred_update(sg_policy);
>
> -               if (sg_policy->policy->fast_switch_enabled)
> -                       cpufreq_driver_fast_switch(sg_policy->policy, next_f);
> -               else
> -                       sugov_deferred_update(sg_policy);
> -       }
>  unlock:
>         raw_spin_unlock(&sg_policy->update_lock);
>  }
> --

The first patch in the series needs some feedback from the scheduler
people, but I can apply this one right away if you want me to.
