Message-ID: <20211203105055.GB3366@techsingularity.net>
Date: Fri, 3 Dec 2021 10:50:55 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Barry Song <21cnbao@...il.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Valentin Schneider <valentin.schneider@....com>,
Aubrey Li <aubrey.li@...ux.intel.com>,
Barry Song <song.bao.hua@...ilicon.com>,
Mike Galbraith <efault@....de>,
Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] sched/fair: Adjust the allowed NUMA imbalance when
SD_NUMA spans multiple LLCs
On Fri, Dec 03, 2021 at 09:15:15PM +1300, Barry Song wrote:
> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> > index d201a7052a29..fee2930745ab 100644
> > --- a/kernel/sched/topology.c
> > +++ b/kernel/sched/topology.c
> > @@ -2242,6 +2242,26 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
> > }
> > }
> >
> > + /* Calculate allowed NUMA imbalance */
> > + for_each_cpu(i, cpu_map) {
> > + int imb_numa_nr = 0;
> > +
> > + for (sd = *per_cpu_ptr(d.sd, i); sd; sd = sd->parent) {
> > + struct sched_domain *child = sd->child;
> > +
> > + if (!(sd->flags & SD_SHARE_PKG_RESOURCES) && child &&
> > + (child->flags & SD_SHARE_PKG_RESOURCES)) {
> > + int nr_groups;
> > +
> > + nr_groups = sd->span_weight / child->span_weight;
> > + imb_numa_nr = max(1U, ((child->span_weight) >> 1) /
> > + (nr_groups * num_online_nodes()));
>
> Hi Mel, you used to have 25% * numa_weight if a node has only one LLC.
> For a system with 4 NUMA nodes, in case sd spans 2 nodes and child is
> one NUMA node, then nr_groups=2, num_online_nodes()=4, and imb_numa_nr
> will be child->span_weight/2/2/4?
>
> Does this patch change the behaviour for machines where a NUMA node
> equals an LLC?
>
Yes, it changes behaviour. Instead of a flat 25%, it takes into account
the number of LLCs per node and the number of nodes overall.
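For illustration, here is a standalone sketch of the arithmetic. The
topology numbers are hypothetical (4 online nodes, 32 CPUs per node, one
LLC per node, sd spanning 2 nodes), and the "old 25% rule" is taken to be
the flat node_weight / 4 that Barry refers to:

	#include <stdio.h>

	int main(void)
	{
		/* Hypothetical topology, for illustration only */
		unsigned int child_span_weight = 32;	/* CPUs in the LLC (child) domain */
		unsigned int sd_span_weight = 64;	/* CPUs in the parent domain (2 nodes) */
		unsigned int nr_online_nodes = 4;

		unsigned int nr_groups = sd_span_weight / child_span_weight;
		unsigned int imb_numa_nr = (child_span_weight >> 1) /
					   (nr_groups * nr_online_nodes);
		if (imb_numa_nr < 1)
			imb_numa_nr = 1;

		/* Earlier behaviour for a single-LLC node: flat 25% of the node */
		unsigned int old_imb = child_span_weight / 4;

		printf("new imb_numa_nr = %u, old 25%% rule = %u\n",
		       imb_numa_nr, old_imb);
		return 0;
	}

With those assumed numbers, the allowed imbalance drops from 8 to 2, so
the new calculation is noticeably stricter as the node and group counts
grow.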
--
Mel Gorman
SUSE Labs