Message-ID: <678F3D1BB717D949B966B68EAEB446ED340BEDD6@dggemm526-mbx.china.huawei.com>
Date: Thu, 9 Jan 2020 12:58:44 +0000
From: "Zengtao (B)" <prime.zeng@...ilicon.com>
To: Morten Rasmussen <morten.rasmussen@....com>
CC: Sudeep Holla <sudeep.holla@....com>,
Valentin Schneider <valentin.schneider@....com>,
Linuxarm <linuxarm@...wei.com>,
"Greg Kroah-Hartman" <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] cpu-topology: warn if NUMA configurations conflicts with lower layer
> -----Original Message-----
> From: Morten Rasmussen [mailto:morten.rasmussen@....com]
> Sent: Thursday, January 09, 2020 6:43 PM
> To: Zengtao (B)
> Cc: Sudeep Holla; Valentin Schneider; Linuxarm; Greg Kroah-Hartman;
> Rafael J. Wysocki; linux-kernel@...r.kernel.org
> Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations conflicts
> with lower layer
>
> On Mon, Jan 06, 2020 at 01:37:59AM +0000, Zengtao (B) wrote:
> > > -----Original Message-----
> > > From: Sudeep Holla [mailto:sudeep.holla@....com]
> > > Sent: Friday, January 03, 2020 7:40 PM
> > > To: Zengtao (B)
> > > Cc: Valentin Schneider; Linuxarm; Greg Kroah-Hartman; Rafael J. Wysocki;
> > > linux-kernel@...r.kernel.org; Morten Rasmussen; Sudeep Holla
> > > Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations conflicts
> > > with lower layer
> > >
> > > On Fri, Jan 03, 2020 at 04:24:04AM +0000, Zengtao (B) wrote:
> > > > > -----Original Message-----
> > > > > From: Valentin Schneider [mailto:valentin.schneider@....com]
> > > > > Sent: Thursday, January 02, 2020 9:22 PM
> > > > > To: Zengtao (B); Sudeep Holla
> > > > > Cc: Linuxarm; Greg Kroah-Hartman; Rafael J. Wysocki;
> > > > > linux-kernel@...r.kernel.org; Morten Rasmussen
> > > > > Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations conflicts
> > > > > with lower layer
> > > > >
> > >
> > > [...]
> > >
> > > > >
> > > > > Right, and that is checked when you have sched_debug on the cmdline
> > > > > (or write 1 to /sys/kernel/debug/sched_debug & regenerate the sched
> > > > > domains)
> > > > >
> > > >
> > > > No, I think you haven't understood my issue yet; please look at my
> > > > example first:
> > > >
> > > > *************************************
> > > > NUMA: 0-2, 3-7
> > > > core_siblings: 0-3, 4-7
> > > > *************************************
> > > > When we are building the sched domain, per the current code:
> > > > (1) For core 3:
> > > > MC sched domain falls back to 3~7
> > > > DIE sched domain is 3~7
> > > > (2) For core 4:
> > > > MC sched domain is 4~7
> > > > DIE sched domain is 3~7
> > > >
> > > > When we build the sched groups for the MC level:
> > > > (1). core3's sched group chain is built as: 3->4->5->6->7->3
> > > > (2). core4's sched group chain is built as: 4->5->6->7->4
> > > > So after (2), core3's sched groups overlap with core4's and no longer
> > > > form a closed chain; when core3's sched groups are used later, the
> > > > iteration loops forever.
> > > >
> > > > And it is difficult for the scheduler to detect such errors,
> > > > which is why I think a warning is necessary here.
> > > >
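To make the failure described above concrete, here is a minimal standalone model of that example (plain userspace C, not the kernel's actual sched group code; the struct and helper below are invented purely for illustration). It builds core3's chain, then rebuilds core4's chain over the same groups, and shows that a walk starting at core3's group never returns to it:

#include <stdio.h>
#include <stdbool.h>

struct group { int cpu; struct group *next; };

/* Follow the supposedly circular list; give up after 'limit' hops. */
static bool chain_is_closed(struct group *start, int limit)
{
	struct group *g = start->next;
	int hops = 1;

	while (g != start) {
		if (hops++ > limit)
			return false;	/* never returned to start: broken chain */
		g = g->next;
	}
	return true;
}

int main(void)
{
	struct group g[8];
	int i;

	for (i = 0; i < 8; i++)
		g[i].cpu = i;

	/* (1) core3's MC chain: 3->4->5->6->7->3 */
	g[3].next = &g[4]; g[4].next = &g[5]; g[5].next = &g[6];
	g[6].next = &g[7]; g[7].next = &g[3];

	/* (2) core4's MC chain reuses the same groups: 4->5->6->7->4 */
	g[7].next = &g[4];	/* this edge no longer points back to core3 */

	printf("core3 chain closed: %s\n",
	       chain_is_closed(&g[3], 8) ? "yes" : "no (the kernel would loop forever)");
	return 0;
}

The kernel's data structures are different, of course; the point is only that once step (2) rewrites the 7->3 edge, any iteration that starts from core3's group and expects to come back to it cannot terminate.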
> > >
> > > We can figure out a way to warn if it's absolutely necessary, but I
> > > would like to understand the system topology here. You haven't
> > > answered my query on cache topology. Please describe in more detail
> > > why the NUMA configuration looks like the above example, with the
> > > specific hardware design details. Is this just a case where the user
> > > can specify anything they wish?
> > >
> >
> > Sorry for the late response. In fact, it's a VM use case; you can simply
> > treat it as a test case. It's a corner case, but it will hang the kernel,
> > which is why I suggest adding a warning.
> >
> > I think we need a sanity check, or simply a warning, either in the
> > scheduler or in the arch topology parsing.
>
> IIUC, the problem is that virt can set up a broken topology in some
> cases where MPIDR doesn't line up correctly with the defined NUMA
> nodes.
>
> We could argue that it is a qemu/virt problem, but it would be nice if
> we could at least detect it. The proposed patch isn't really the right
> solution, since it warns on some valid topologies, as Sudeep already
> pointed out.
>
> It sounds more like we need a mask subset check in the sched_domain
> building code, if there isn't already one?
Currently there isn't one. It's a bit complex to do the check in the
sched_domain building code; I need to think it over.
Suggestions welcome.
Thanks
Zengtao
>
> Morten
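For reference, the subset check Morten suggests above could be prototyped roughly as below. This is only a standalone userspace sketch of the idea, not a patch: the CPU masks are plain bitmasks and the topology is hard-coded to the example from this thread. In the kernel, the equivalent test would compare topology_core_cpumask(cpu) against cpumask_of_node(cpu_to_node(cpu)) with cpumask_subset() and warn on a mismatch.

#include <stdio.h>

int main(void)
{
	/* The example topology: NUMA nodes 0-2 and 3-7, core_siblings 0-3 and 4-7 */
	unsigned long node_mask[2] = { 0x07, 0xf8 };	/* CPUs 0-2, 3-7 */
	unsigned long core_mask[2] = { 0x0f, 0xf0 };	/* CPUs 0-3, 4-7 */
	int cpu;

	for (cpu = 0; cpu < 8; cpu++) {
		int node = (cpu <= 2) ? 0 : 1;
		int pkg  = (cpu <= 3) ? 0 : 1;

		/* core_siblings should be a subset of the CPU's node mask */
		if (core_mask[pkg] & ~node_mask[node])
			printf("CPU%d: core_siblings 0x%02lx not a subset of node %d mask 0x%02lx\n",
			       cpu, core_mask[pkg], node, node_mask[node]);
	}
	return 0;
}

With the masks above this flags CPUs 0-3, i.e. exactly the CPUs whose MC level would span two NUMA nodes; where such a check should live (arch topology parsing vs. sched_domain build) is the open question in this thread.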