Message-ID: <678F3D1BB717D949B966B68EAEB446ED340B3203@dggemm526-mbx.china.huawei.com>
Date: Mon, 6 Jan 2020 01:48:54 +0000
From: "Zengtao (B)" <prime.zeng@...ilicon.com>
To: Dietmar Eggemann <dietmar.eggemann@....com>,
Valentin Schneider <valentin.schneider@....com>,
Sudeep Holla <sudeep.holla@....com>
CC: Linuxarm <linuxarm@...wei.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Morten Rasmussen" <morten.rasmussen@....com>
Subject: RE: [PATCH] cpu-topology: warn if NUMA configurations conflicts
with lower layer
> -----Original Message-----
> From: Dietmar Eggemann [mailto:dietmar.eggemann@....com]
> Sent: Saturday, January 04, 2020 1:21 AM
> To: Valentin Schneider; Zengtao (B); Sudeep Holla
> Cc: Linuxarm; Greg Kroah-Hartman; Rafael J. Wysocki;
> linux-kernel@...r.kernel.org; Morten Rasmussen
> Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations conflicts
> with lower layer
>
> On 03/01/2020 13:14, Valentin Schneider wrote:
> > On 03/01/2020 10:57, Valentin Schneider wrote:
> >> I'm juggling with other things atm, but let me have a think and see if we
> >> couldn't detect that in the scheduler itself.
>
> If this is a common problem, we should detect it in the scheduler rather
> than in
> the arch code.
>
> > Something like this ought to catch your case; might need to compare group
> > spans rather than pure group pointers.
> >
> > ---
> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> > index 6ec1e595b1d4..c4151e11afcd 100644
> > --- a/kernel/sched/topology.c
> > +++ b/kernel/sched/topology.c
> > @@ -1120,6 +1120,13 @@ build_sched_groups(struct sched_domain *sd, int cpu)
> >
> > sg = get_group(i, sdd);
> >
> > + /* sg's are inited as self-looping. If 'last' is not self
> > + * looping, we set it in a previous visit. No further visit
> > + * should change the link order, if we do then the topology
> > + * description is terminally broken.
> > + */
> > + BUG_ON(last && last->next != last && last->next != sg);
> > +
> > cpumask_or(covered, covered, sched_group_span(sg));
> >
> > if (!first)
> >
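(On the "compare group spans rather than pure group pointers" note above: an
untested sketch of that variant, reusing the existing cpumask_equal() and
sched_group_span() helpers, might look like this:)

	/* Same invariant, but keyed on the group spans instead of the
	 * group pointers (untested sketch):
	 */
	BUG_ON(last && !cpumask_equal(sched_group_span(last->next),
				      sched_group_span(last)) &&
		       !cpumask_equal(sched_group_span(last->next),
				      sched_group_span(sg)));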
>
> Still don't see the actual problem case. The closest I came is:
>
> qemu-system-aarch64 -kernel ... -append ' ... loglevel=8 sched_debug'
> -smp cores=4,sockets=2 ... -numa node,cpus=0-2,nodeid=0
> -numa node,cpus=3-7,nodeid=1
>
It's related to the HW topology: if your hardware has two clusters, 0~3 and
4~7, you will see the issue with mainline qemu.
I think you can reproduce it by manually modifying the MPIDR parsing.
Linux uses the MPIDR to guess the MC topology, since qemu currently
doesn't provide it.
Refer to: https://patchwork.ozlabs.org/cover/939301/
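To illustrate what "guessing from the MPIDR" means here: the MPIDR affinity
fields encode the cluster layout, roughly like this (an illustrative sketch
using the MPIDR_AFFINITY_LEVEL() macro, not the exact kernel code path):

	u64 mpidr = read_cpuid_mpidr();

	/* Aff0 = core within the cluster, Aff1 = cluster number, so CPUs
	 * 0~3 and 4~7 end up in two different MC groups regardless of the
	 * NUMA nodes given on the qemu command line.
	 */
	int core_id    = MPIDR_AFFINITY_LEVEL(mpidr, 0);
	int cluster_id = MPIDR_AFFINITY_LEVEL(mpidr, 1);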
> but this behaves sane. Since DIE and NUMA have the same span, the
> former degenerates.
>
> [ 0.654451] CPU0 attaching sched-domain(s):
> [ 0.654483] domain-0: span=0-2 level=MC
> [ 0.654635] groups: 0:{ span=0 cap=1008 }, 1:{ span=1 cap=1015 }, 2:{ span=2 cap=1014 }
> [ 0.654787] domain-1: span=0-7 level=NUMA
> [ 0.654805] groups: 0:{ span=0-2 cap=3037 }, 3:{ span=3-7 cap=5048 }
> [ 0.655326] CPU1 attaching sched-domain(s):
> [ 0.655339] domain-0: span=0-2 level=MC
> [ 0.655356] groups: 1:{ span=1 cap=1015 }, 2:{ span=2 cap=1014 }, 0:{ span=0 cap=1008 }
> [ 0.655391] domain-1: span=0-7 level=NUMA
> [ 0.655407] groups: 0:{ span=0-2 cap=3037 }, 3:{ span=3-7 cap=5048 }
> [ 0.655480] CPU2 attaching sched-domain(s):
> [ 0.655492] domain-0: span=0-2 level=MC
> [ 0.655507] groups: 2:{ span=2 cap=1014 }, 0:{ span=0 cap=1008 }, 1:{ span=1 cap=1015 }
> [ 0.655541] domain-1: span=0-7 level=NUMA
> [ 0.655556] groups: 0:{ span=0-2 cap=3037 }, 3:{ span=3-7 cap=5048 }
> [ 0.655603] CPU3 attaching sched-domain(s):
> [ 0.655614] domain-0: span=3-7 level=MC
> [ 0.655628] groups: 3:{ span=3 cap=984 }, 4:{ span=4 cap=1015 }, 5:{ span=5 cap=1016 }, 6:{ span=6 cap=1016 }, 7:{ span=7 cap=1017 }
> [ 0.655693] domain-1: span=0-7 level=NUMA
> [ 0.655721] groups: 3:{ span=3-7 cap=5048 }, 0:{ span=0-2 cap=3037 }
> [ 0.655769] CPU4 attaching sched-domain(s):
> [ 0.655780] domain-0: span=3-7 level=MC
> [ 0.655795] groups: 4:{ span=4 cap=1015 }, 5:{ span=5 cap=1016 }, 6:{ span=6 cap=1016 }, 7:{ span=7 cap=1017 }, 3:{ span=3 cap=984 }
> [ 0.655841] domain-1: span=0-7 level=NUMA
> [ 0.655855] groups: 3:{ span=3-7 cap=5048 }, 0:{ span=0-2 cap=3037 }
> [ 0.655902] CPU5 attaching sched-domain(s):
> [ 0.655916] domain-0: span=3-7 level=MC
> [ 0.655930] groups: 5:{ span=5 cap=1016 }, 6:{ span=6 cap=1016 }, 7:{ span=7 cap=1017 }, 3:{ span=3 cap=984 }, 4:{ span=4 cap=1015 }
> [ 0.656545] domain-1: span=0-7 level=NUMA
> [ 0.656562] groups: 3:{ span=3-7 cap=5048 }, 0:{ span=0-2 cap=3037 }
> [ 0.656775] CPU6 attaching sched-domain(s):
> [ 0.656796] domain-0: span=3-7 level=MC
> [ 0.656835] groups: 6:{ span=6 cap=1016 }, 7:{ span=7 cap=1017 }, 3:{ span=3 cap=984 }, 4:{ span=4 cap=1015 }, 5:{ span=5 cap=1016 }
> [ 0.656881] domain-1: span=0-7 level=NUMA
> [ 0.656911] groups: 3:{ span=3-7 cap=5048 }, 0:{ span=0-2 cap=3037 }
> [ 0.657102] CPU7 attaching sched-domain(s):
> [ 0.657113] domain-0: span=3-7 level=MC
> [ 0.657128] groups: 7:{ span=7 cap=1017 }, 3:{ span=3 cap=984 }, 4:{ span=4 cap=1015 }, 5:{ span=5 cap=1016 }, 6:{ span=6 cap=1016 }
> [ 0.657172] domain-1: span=0-7 level=NUMA
> [ 0.657186] groups: 3:{ span=3-7 cap=5048 }, 0:{ span=0-2 cap=3037 }
> [ 0.657241] root domain span: 0-7 (max cpu_capacity = 1024)
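On the "DIE and NUMA have the same span, the former degenerates" point above:
the scheduler drops a level essentially when it covers exactly the same CPUs
as a neighbouring level and adds nothing on top. A paraphrased sketch of that
condition (parent_is_redundant() is a made-up name for illustration, not the
actual helper; the real check also compares the domain flags):

	static bool parent_is_redundant(struct sched_domain *sd,
					struct sched_domain *parent)
	{
		/* Flag comparison omitted from this sketch: a parent
		 * domain spanning the same CPUs as its child, and adding
		 * nothing on top, is removed from the hierarchy.
		 */
		return cpumask_equal(sched_domain_span(sd),
				     sched_domain_span(parent));
	}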