Message-ID: <20200103114011.GB19390@bogus>
Date: Fri, 3 Jan 2020 11:40:11 +0000
From: Sudeep Holla <sudeep.holla@....com>
To: "Zengtao (B)" <prime.zeng@...ilicon.com>
Cc: Valentin Schneider <valentin.schneider@....com>,
Linuxarm <linuxarm@...wei.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Morten Rasmussen <morten.rasmussen@....com>,
Sudeep Holla <sudeep.holla@....com>
Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations conflicts with
lower layer
On Fri, Jan 03, 2020 at 04:24:04AM +0000, Zengtao (B) wrote:
> > -----Original Message-----
> > From: Valentin Schneider [mailto:valentin.schneider@....com]
> > Sent: Thursday, January 02, 2020 9:22 PM
> > To: Zengtao (B); Sudeep Holla
> > Cc: Linuxarm; Greg Kroah-Hartman; Rafael J. Wysocki;
> > linux-kernel@...r.kernel.org; Morten Rasmussen
> > Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations conflicts
> > with lower layer
> >
[...]
> >
> > Right, and that is checked when you have sched_debug on the cmdline
> > (or write 1 to /sys/kernel/debug/sched_debug & regenerate the sched
> > domains)
> >
>
> No, I think you are missing my point here; please walk through my
> example first:
>
> *************************************
> NUMA: 0-2, 3-7
> core_siblings: 0-3, 4-7
> *************************************
> When we are building the sched domains, per the current code:
> (1) For core 3:
> the MC sched domain falls back to 3-7
> the DIE sched domain is 3-7
> (2) For core 4:
> the MC sched domain is 4-7
> the DIE sched domain is 3-7
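>
> That fallback comes from cpu_coregroup_mask() in
> drivers/base/arch_topology.c, which (paraphrased and trimmed to the
> relevant check here; see the actual tree for the full function) only
> uses the package siblings when they fit inside the NUMA node:
>
>     const struct cpumask *cpu_coregroup_mask(int cpu)
>     {
>             const cpumask_t *core_mask = cpumask_of_node(cpu_to_node(cpu));
>
>             /* use the package siblings only if contained in the node mask */
>             if (cpumask_subset(&cpu_topology[cpu].core_sibling, core_mask))
>                     core_mask = &cpu_topology[cpu].core_sibling;
>
>             return core_mask;
>     }
>
> Core 3's core_sibling mask 0-3 is not a subset of its node mask 3-7,
> so its MC level silently widens to the whole node.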
>
> When we build the sched groups for the MC level:
> (1). core3's sched group chain is built as: 3->4->5->6->7->3
> (2). core4's sched group chain is built as: 4->5->6->7->4
> So after (2), core3's sched group chain overlaps with core4's and is
> no longer a closed ring; any later walk of core3's sched groups loops
> forever (a toy model illustrating this follows).
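>
> Here is that toy userspace model (made-up struct and function names,
> not the real kernel data structures) of how relinking core 4's ring
> breaks core 3's:
>
>     #include <stdio.h>
>
>     struct group { int cpu; struct group *next; };
>
>     static struct group g[8]; /* one shared group object per CPU */
>
>     /* relink the groups for CPUs first..last into a ring at first */
>     static void build_chain(int first, int last)
>     {
>             for (int c = first; c < last; c++)
>                     g[c].next = &g[c + 1];
>             g[last].next = &g[first];
>     }
>
>     int main(void)
>     {
>             for (int c = 0; c < 8; c++)
>                     g[c].cpu = c;
>
>             build_chain(3, 7); /* core 3's MC span after the fallback */
>             build_chain(4, 7); /* core 4's MC span relinks g[7].next */
>
>             /* walk core 3's "ring": 3 4 5 6 7 4 5 6 7 ... never 3 again */
>             struct group *p = &g[3];
>             for (int i = 0; i < 12; i++, p = p->next)
>                     printf("%d ", p->cpu);
>             printf("...\n");
>             return 0;
>     }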
>
> It is hard for the scheduler itself to detect such errors, which is
> why I think a warning is necessary here.
>
We can figure out a way to warn if it's absolutely necessary, but I
would like to understand the system topology here first. You haven't
answered my query on the cache topology. Please describe, with the
specific hardware design details, why the NUMA configuration looks
like the example above. Or is this just a case where the user can
specify anything they wish?
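
For what it's worth, if we do end up adding one, a minimal sketch of
the shape it could take (hypothetical helper name; keyed on the same
subset test that triggers the MC fallback) might be:

    /*
     * Hypothetical sketch only; the real patch under discussion
     * may look quite different.
     */
    static void warn_core_siblings_vs_numa(int cpu)
    {
            const struct cpumask *node_mask =
                    cpumask_of_node(cpu_to_node(cpu));

            if (!cpumask_subset(&cpu_topology[cpu].core_sibling, node_mask))
                    pr_warn("CPU%d: core_siblings %*pbl not contained in NUMA node %*pbl\n",
                            cpu, cpumask_pr_args(&cpu_topology[cpu].core_sibling),
                            cpumask_pr_args(node_mask));
    }

Keying it on the same condition that causes the MC fallback would tie
the warning to exactly the configurations that corrupt the groups.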
--
Regards,
Sudeep