Message-ID: <CAKfTPtDHJxtczRCATGJfuHHuQy9NbpXZAsPL1R9Qf=Jd46TU-A@mail.gmail.com>
Date:   Fri, 17 Sep 2021 17:26:01 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>
Cc:     "Peter Zijlstra (Intel)" <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
        Nicholas Piggin <npiggin@...il.com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Len Brown <len.brown@...el.com>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Aubrey Li <aubrey.li@...ux.intel.com>,
        "Ravi V. Shankar" <ravi.v.shankar@...el.com>,
        Ricardo Neri <ricardo.neri@...el.com>,
        Quentin Perret <qperret@...gle.com>,
        "Joel Fernandes (Google)" <joel@...lfernandes.org>,
        linuxppc-dev@...ts.ozlabs.org,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Aubrey Li <aubrey.li@...el.com>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>
Subject: Re: [PATCH v5 2/6] sched/topology: Introduce sched_group::flags

On Sat, 11 Sept 2021 at 03:19, Ricardo Neri
<ricardo.neri-calderon@...ux.intel.com> wrote:
>
> There are situations in which the load balancer needs to know the
> properties of the CPUs in a scheduling group. When using asymmetric
> packing, for instance, the load balancer needs to know not only the
> state of dst_cpu but also that of its SMT siblings, if any.
>
> Use the flags of the child scheduling domains to initialize scheduling
> group flags. This will reflect the properties of the CPUs in the
> group.
>
> A subsequent changeset will make use of these new flags. No functional
> changes are introduced.
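
For readers following the series, here is a minimal user-space sketch of the
idea (toy code, not kernel source): each group records the SD_* flags of the
child domain it was built from, so a later consumer such as the
asymmetric-packing path can ask whether the CPUs in a group are SMT siblings
without re-walking the topology. The toy_* names and the flag value are
illustrative stand-ins, not the kernel's definitions.

#include <stdio.h>

/* Illustrative stand-in for the kernel's SD_SHARE_CPUCAPACITY bit. */
#define SD_SHARE_CPUCAPACITY	0x1	/* child domain spans SMT siblings */

struct toy_domain {
	int flags;			/* topology flags of this domain level */
	struct toy_domain *child;	/* lower level, NULL at the bottom */
};

struct toy_group {
	int flags;			/* copy of the child domain's flags */
};

/* Roughly mirrors what the patch does when building a group. */
static void toy_init_group(struct toy_group *sg, const struct toy_domain *sd)
{
	sg->flags = sd->child ? sd->child->flags : 0;
}

int main(void)
{
	struct toy_domain smt  = { .flags = SD_SHARE_CPUCAPACITY, .child = NULL };
	struct toy_domain core = { .flags = 0, .child = &smt };
	struct toy_group sg;

	toy_init_group(&sg, &core);

	/* A load balancer could now test the group directly. */
	if (sg.flags & SD_SHARE_CPUCAPACITY)
		printf("group CPUs are SMT siblings of each other\n");
	else
		printf("group CPUs do not share core capacity\n");

	return 0;
}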
>
> Cc: Aubrey Li <aubrey.li@...el.com>
> Cc: Ben Segall <bsegall@...gle.com>
> Cc: Daniel Bristot de Oliveira <bristot@...hat.com>
> Cc: Dietmar Eggemann <dietmar.eggemann@....com>
> Cc: Mel Gorman <mgorman@...e.de>
> Cc: Quentin Perret <qperret@...gle.com>
> Cc: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
> Cc: Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
> Cc: Steven Rostedt <rostedt@...dmis.org>
> Cc: Tim Chen <tim.c.chen@...ux.intel.com>
> Reviewed-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> Reviewed-by: Len Brown <len.brown@...el.com>
> Originally-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Signed-off-by: Ricardo Neri <ricardo.neri-calderon@...ux.intel.com>

Reviewed-by: Vincent Guittot <vincent.guittot@...aro.org>

> ---
> Changes since v4:
>   * None
>
> Changes since v3:
>   * Clear the flags of the scheduling groups of a domain if its child is
>     destroyed.
>   * Minor rewording of the commit message.
>
> Changes since v2:
>   * Introduced this patch.
>
> Changes since v1:
>   * N/A
> ---
>  kernel/sched/sched.h    |  1 +
>  kernel/sched/topology.c | 21 ++++++++++++++++++---
>  2 files changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 3d3e5793e117..86ab33ce529d 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1809,6 +1809,7 @@ struct sched_group {
>         unsigned int            group_weight;
>         struct sched_group_capacity *sgc;
>         int                     asym_prefer_cpu;        /* CPU of highest priority in group */
> +       int                     flags;
>
>         /*
>          * The CPUs this group covers.
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 4e8698e62f07..c56faae461d9 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -716,8 +716,20 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
>                 tmp = sd;
>                 sd = sd->parent;
>                 destroy_sched_domain(tmp);
> -               if (sd)
> +               if (sd) {
> +                       struct sched_group *sg = sd->groups;
> +
> +                       /*
> +                        * sched groups hold the flags of the child sched
> +                        * domain for convenience. Clear such flags since
> +                        * the child is being destroyed.
> +                        */
> +                       do {
> +                               sg->flags = 0;
> +                       } while (sg != sd->groups);
> +
>                         sd->child = NULL;
> +               }
>         }
>
>         for (tmp = sd; tmp; tmp = tmp->parent)
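
A side note on the do/while in this hunk: sched groups are chained in a
circular list through sg->next, and as posted the loop body never advances sg,
so it exits after clearing sd->groups itself. The sketch below (standalone toy
code with toy_* names, not a replacement hunk) shows the conventional walk that
visits every group in the ring before stopping back at the head.

#include <stdio.h>

/* Toy stand-in for the circular sg->next list used by sched groups. */
struct toy_group {
	int flags;
	struct toy_group *next;
};

/* Clear the flags of every group in the ring headed by 'head'. */
static void toy_clear_group_flags(struct toy_group *head)
{
	struct toy_group *sg = head;

	do {
		sg->flags = 0;
		sg = sg->next;		/* advance before re-testing the head */
	} while (sg != head);
}

int main(void)
{
	struct toy_group a, b, c;

	a = (struct toy_group){ .flags = 1, .next = &b };
	b = (struct toy_group){ .flags = 1, .next = &c };
	c = (struct toy_group){ .flags = 1, .next = &a };

	toy_clear_group_flags(&a);
	printf("%d %d %d\n", a.flags, b.flags, c.flags);	/* prints: 0 0 0 */
	return 0;
}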
> @@ -916,10 +928,12 @@ build_group_from_child_sched_domain(struct sched_domain *sd, int cpu)
>                 return NULL;
>
>         sg_span = sched_group_span(sg);
> -       if (sd->child)
> +       if (sd->child) {
>                 cpumask_copy(sg_span, sched_domain_span(sd->child));
> -       else
> +               sg->flags = sd->child->flags;
> +       } else {
>                 cpumask_copy(sg_span, sched_domain_span(sd));
> +       }
>
>         atomic_inc(&sg->ref);
>         return sg;
> @@ -1169,6 +1183,7 @@ static struct sched_group *get_group(int cpu, struct sd_data *sdd)
>         if (child) {
>                 cpumask_copy(sched_group_span(sg), sched_domain_span(child));
>                 cpumask_copy(group_balance_mask(sg), sched_group_span(sg));
> +               sg->flags = child->flags;
>         } else {
>                 cpumask_set_cpu(cpu, sched_group_span(sg));
>                 cpumask_set_cpu(cpu, group_balance_mask(sg));
> --
> 2.17.1
>
