Message-ID: <CAKfTPtD0BmFFc+Hce-iwDXF5+P1jm16+sW74-u0WZO1-9x5=eQ@mail.gmail.com>
Date: Thu, 7 Mar 2024 17:50:57 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Shrikanth Hegde <sshegde@...ux.ibm.com>
Cc: mingo@...nel.org, peterz@...radead.org, yu.c.chen@...el.com, 
	dietmar.eggemann@....com, linux-kernel@...r.kernel.org, nysal@...ux.ibm.com, 
	aboorvad@...ux.ibm.com, srikar@...ux.ibm.com, vschneid@...hat.com, 
	pierre.gondois@....com, qyousef@...alina.io
Subject: Re: [PATCH v6 3/3] sched/fair: Combine EAS check with overutilized access

On Thu, 7 Mar 2024 at 09:58, Shrikanth Hegde <sshegde@...ux.ibm.com> wrote:
>
> Access to overutilized is always paired with sched_energy_enabled in
> the pattern:
>
> if (sched_energy_enabled() && !overutilized)
>        do something
>
> So modify the helper function to return this pattern instead. The code
> becomes more readable: do something when the root domain is not
> overutilized.
>
> No change in functionality intended.
>
> Suggested-by: Vincent Guittot <vincent.guittot@...aro.org>
> Signed-off-by: Shrikanth Hegde <sshegde@...ux.ibm.com>

Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>


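As a side note for anyone skimming the archive, here is a minimal
standalone sketch of the shape of this refactor. These are user-space
stand-ins, not the actual kernel helpers (no READ_ONCE(), no static key
for sched_energy_enabled()); it only shows how folding the EAS check
into the helper simplifies the call sites:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel types and helpers. */
struct root_domain {
	int overutilized;
};

static bool sched_energy_enabled_stub = true;	/* stand-in for sched_energy_enabled() */

/* Old helper: only reports overutilized; callers also had to check EAS. */
static bool is_rd_overutilized_old(struct root_domain *rd)
{
	return rd->overutilized;
}

/* New helper: folds the EAS check in, so each call site asks one question. */
static bool is_rd_not_overutilized(struct root_domain *rd)
{
	return sched_energy_enabled_stub && !rd->overutilized;
}

int main(void)
{
	struct root_domain rd = { .overutilized = 0 };

	/* Before: two conditions repeated at every call site. */
	if (sched_energy_enabled_stub && !is_rd_overutilized_old(&rd))
		printf("old pattern: try energy-aware placement\n");

	/* After: a single, readable condition. */
	if (is_rd_not_overutilized(&rd))
		printf("new pattern: try energy-aware placement\n");

	return 0;
}
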
> ---
>  kernel/sched/fair.c | 24 +++++++++---------------
>  1 file changed, 9 insertions(+), 15 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 87e08a252f94..bcda18a2ccfe 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6676,12 +6676,11 @@ static inline bool cpu_overutilized(int cpu)
>  }
>
>  /*
> - * Ensure that caller can do EAS. overutilized value
> - * make sense only if EAS is enabled
> + * overutilized value make sense only if EAS is enabled
>   */
> -static inline int is_rd_overutilized(struct root_domain *rd)
> +static inline int is_rd_not_overutilized(struct root_domain *rd)
>  {
> -       return READ_ONCE(rd->overutilized);
> +       return sched_energy_enabled() && !READ_ONCE(rd->overutilized);
>  }
>
>  static inline void set_rd_overutilized_status(struct root_domain *rd,
> @@ -6700,10 +6699,8 @@ static inline void check_update_overutilized_status(struct rq *rq)
>          * overutilized field is used for load balancing decisions only
>          * if energy aware scheduler is being used
>          */
> -       if (!sched_energy_enabled())
> -               return;
>
> -       if (!is_rd_overutilized(rq->rd) && cpu_overutilized(rq->cpu))
> +       if (is_rd_not_overutilized(rq->rd) && cpu_overutilized(rq->cpu))
>                 set_rd_overutilized_status(rq->rd, SG_OVERUTILIZED);
>  }
>  #else
> @@ -7989,7 +7986,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>
>         rcu_read_lock();
>         pd = rcu_dereference(rd->pd);
> -       if (!pd || is_rd_overutilized(rd))
> +       if (!pd)
>                 goto unlock;
>
>         /*
> @@ -8192,7 +8189,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
>                     cpumask_test_cpu(cpu, p->cpus_ptr))
>                         return cpu;
>
> -               if (sched_energy_enabled()) {
> +               if (is_rd_not_overutilized(this_rq()->rd)) {
>                         new_cpu = find_energy_efficient_cpu(p, prev_cpu);
>                         if (new_cpu >= 0)
>                                 return new_cpu;
> @@ -10869,12 +10866,9 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
>         if (busiest->group_type == group_misfit_task)
>                 goto force_balance;
>
> -       if (sched_energy_enabled()) {
> -               struct root_domain *rd = env->dst_rq->rd;
> -
> -               if (rcu_dereference(rd->pd) && !is_rd_overutilized(rd))
> -                       goto out_balanced;
> -       }
> +       if (is_rd_not_overutilized(env->dst_rq->rd) &&
> +           rcu_dereference(env->dst_rq->rd->pd))
> +               goto out_balanced;
>
>         /* ASYM feature bypasses nice load balance check */
>         if (busiest->group_type == group_asym_packing)
> --
> 2.39.3
>
