Message-ID: <YgKX+cmDDA1VO9kX@BLR-5CG11610CF.amd.com>
Date: Tue, 8 Feb 2022 21:49:05 +0530
From: "Gautham R. Shenoy" <gautham.shenoy@....com>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Valentin Schneider <valentin.schneider@....com>,
Aubrey Li <aubrey.li@...ux.intel.com>,
Barry Song <song.bao.hua@...ilicon.com>,
Mike Galbraith <efault@....de>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
K Prateek Nayak <kprateek.nayak@....com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] sched/fair: Adjust the allowed NUMA imbalance when
SD_NUMA spans multiple LLCs
On Tue, Feb 08, 2022 at 09:43:34AM +0000, Mel Gorman wrote:
[..snip..]
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index d201a7052a29..e6cd55951304 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2242,6 +2242,59 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
> }
> }
>
> + /*
> + * Calculate an allowed NUMA imbalance such that LLCs do not get
> + * imbalanced.
> + */
> + for_each_cpu(i, cpu_map) {
> + unsigned int imb = 0;
> + unsigned int imb_span = 1;
> +
> + for (sd = *per_cpu_ptr(d.sd, i); sd; sd = sd->parent) {
> + struct sched_domain *child = sd->child;
> +
> + if (!(sd->flags & SD_SHARE_PKG_RESOURCES) && child &&
> + (child->flags & SD_SHARE_PKG_RESOURCES)) {
> + struct sched_domain *top, *top_p;
> + unsigned int nr_llcs;
> +
> + /*
> + * For a single LLC per node, allow an
> + * imbalance up to 25% of the node. This is an
> + * arbitrary cutoff based on SMT-2 to balance
> + * between memory bandwidth and avoiding
> + * premature sharing of HT resources and SMT-4
> + * or SMT-8 *may* benefit from a different
> + * cutoff.
> + *
> + * For multiple LLCs, allow an imbalance
> + * until multiple tasks would share an LLC
> + * on one node while LLCs on another node
> + * remain idle.
> + */
> + nr_llcs = sd->span_weight / child->span_weight;
> + if (nr_llcs == 1)
> + imb = sd->span_weight >> 2;
> + else
> + imb = nr_llcs;
> + sd->imb_numa_nr = imb;
> +
> + /* Set span based on the first NUMA domain. */
> + top = sd;
> + top_p = top->parent;
> + while (top_p && !(top_p->flags & SD_NUMA)) {
> + top = top->parent;
> + top_p = top->parent;
> + }
> + imb_span = top_p ? top_p->span_weight : sd->span_weight;
> + } else {
> + int factor = max(1U, (sd->span_weight / imb_span));
> +
> + sd->imb_numa_nr = imb * factor;
> + }
> + }
> + }
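To make the arithmetic below easier to follow, here is a minimal
userspace sketch (not kernel code) of the same logic. struct dom,
set_imb_numa_nr() and the flag values are illustrative stand-ins for
struct sched_domain, the SD_* flags and the loop in
build_sched_domains(), with each CPU's domain hierarchy modelled as an
array of levels from lowest to highest:

/* Illustrative stand-ins for the kernel's SD_* flags. */
#define SD_SHARE_PKG_RESOURCES	0x1	/* domain spans one LLC */
#define SD_NUMA			0x2	/* domain crosses NUMA nodes */

/* Illustrative stand-in for struct sched_domain. */
struct dom {
	unsigned int span_weight;	/* number of CPUs the domain spans */
	unsigned int flags;
	unsigned int imb_numa_nr;	/* output: allowed NUMA imbalance */
};

/*
 * Walk one CPU's domain hierarchy from the lowest level (levels[0]) to
 * the highest (levels[n - 1]) and fill in imb_numa_nr. levels[i - 1]
 * plays the role of sd->child and levels[i + 1..] the sd->parent chain.
 */
static void set_imb_numa_nr(struct dom *levels, int n)
{
	unsigned int imb = 0;
	unsigned int imb_span = 1;

	for (int i = 0; i < n; i++) {
		struct dom *sd = &levels[i];
		struct dom *child = i > 0 ? &levels[i - 1] : NULL;

		if (!(sd->flags & SD_SHARE_PKG_RESOURCES) && child &&
		    (child->flags & SD_SHARE_PKG_RESOURCES)) {
			/* First domain above the LLC: pick the base imbalance. */
			unsigned int nr_llcs = sd->span_weight / child->span_weight;

			imb = (nr_llcs == 1) ? sd->span_weight >> 2 : nr_llcs;
			sd->imb_numa_nr = imb;

			/*
			 * imb_span is the span of the first SD_NUMA
			 * ancestor, or this domain's own span if none.
			 */
			imb_span = sd->span_weight;
			for (int j = i + 1; j < n; j++) {
				if (levels[j].flags & SD_NUMA) {
					imb_span = levels[j].span_weight;
					break;
				}
			}
		} else {
			/* Other domains: scale imb by the span ratio. */
			unsigned int factor = sd->span_weight / imb_span;

			sd->imb_numa_nr = imb * (factor > 1 ? factor : 1);
		}
	}
}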
On a 2 Socket Zen3 server with 64 cores per socket, the imb_numa_nr
works out as follows for the different Nodes Per Socket (NPS) modes:
NPS = 1
=======
SMT (span = 2) --> MC (span = 16) --> DIE (span = 128) --> NUMA (span = 256)
Parent of the LLC is DIE. nr_llcs = 128/16 = 8. imb = 8.
top_p = NUMA. imb_span = 256.
For the NUMA domain, factor = max(1U, 256/256) = 1. Thus sd->imb_numa_nr = 8.
NPS = 2
=======
SMT(span=2)--> MC(span=16)--> NODE(span=64)--> NUMA1(span=128)--> NUMA2(span=256)
Parent of LLC = NODE. nr_llcs = 64/16 = 4. imb = 4.
top_p = NUMA1. imb_span = 128.
For the NUMA1 domain, factor = 1. sd->imb_numa_nr = 4.
For the NUMA2 domain, factor = 2. sd->imb_numa_nr = 8.
NPS = 4
=======
SMT(span=2)--> MC(span=16)--> NODE(span=32)--> NUMA1(span=128)--> NUMA2(span=256)
Parent of LLC = NODE. nr_llcs = 32/16 = 2. imb = 2.
top_p = NUMA1. imb_span = 128.
For the NUMA1 domain, factor = 1. sd->imb_numa_nr = 2.
For the NUMA2 domain, factor = 2. sd->imb_numa_nr = 4.
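Appending the following hypothetical driver to the sketch above (so it
can see struct dom and set_imb_numa_nr()), with the three hierarchies
encoded exactly as listed, reproduces these numbers: 8/8 for DIE/NUMA
in NPS=1, 4/4/8 for NODE/NUMA1/NUMA2 in NPS=2, and 2/2/4 in NPS=4,
with the SMT and MC levels staying at 0:

#include <stdio.h>

int main(void)
{
	/* Per-CPU domain levels, lowest first; spans/flags as listed above. */
	struct dom nps1[] = {
		{ 2,   SD_SHARE_PKG_RESOURCES },	/* SMT  */
		{ 16,  SD_SHARE_PKG_RESOURCES },	/* MC   */
		{ 128, 0 },				/* DIE  */
		{ 256, SD_NUMA },			/* NUMA */
	};
	struct dom nps2[] = {
		{ 2,   SD_SHARE_PKG_RESOURCES },	/* SMT   */
		{ 16,  SD_SHARE_PKG_RESOURCES },	/* MC    */
		{ 64,  0 },				/* NODE  */
		{ 128, SD_NUMA },			/* NUMA1 */
		{ 256, SD_NUMA },			/* NUMA2 */
	};
	struct dom nps4[] = {
		{ 2,   SD_SHARE_PKG_RESOURCES },	/* SMT   */
		{ 16,  SD_SHARE_PKG_RESOURCES },	/* MC    */
		{ 32,  0 },				/* NODE  */
		{ 128, SD_NUMA },			/* NUMA1 */
		{ 256, SD_NUMA },			/* NUMA2 */
	};
	struct { const char *name; struct dom *d; int n; } modes[] = {
		{ "NPS1", nps1, 4 }, { "NPS2", nps2, 5 }, { "NPS4", nps4, 5 },
	};

	for (int m = 0; m < 3; m++) {
		set_imb_numa_nr(modes[m].d, modes[m].n);
		for (int i = 0; i < modes[m].n; i++)
			printf("%s level %d (span %3u): imb_numa_nr = %u\n",
			       modes[m].name, i, modes[m].d[i].span_weight,
			       modes[m].d[i].imb_numa_nr);
	}
	return 0;
}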
The imb_numa_nr values look good for all the NPS modes. Furthermore,
running stream with 16 threads (equal to the number of LLCs in the
system) yields good results in all the NPS modes with this imb_numa_nr.
Reviewed-by: Gautham R. Shenoy <gautham.shenoy@....com>
--
Thanks and Regards
gautham.