Message-ID: <20200109104306.GA10914@e105550-lin.cambridge.arm.com>
Date:   Thu, 9 Jan 2020 10:43:06 +0000
From:   Morten Rasmussen <morten.rasmussen@....com>
To:     "Zengtao (B)" <prime.zeng@...ilicon.com>
Cc:     Sudeep Holla <sudeep.holla@....com>,
        Valentin Schneider <valentin.schneider@....com>,
        Linuxarm <linuxarm@...wei.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        "Rafael J. Wysocki" <rafael@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations conflicts with
 lower layer

On Mon, Jan 06, 2020 at 01:37:59AM +0000, Zengtao (B) wrote:
> > -----Original Message-----
> > From: Sudeep Holla [mailto:sudeep.holla@....com]
> > Sent: Friday, January 03, 2020 7:40 PM
> > To: Zengtao (B)
> > Cc: Valentin Schneider; Linuxarm; Greg Kroah-Hartman; Rafael J. Wysocki;
> > linux-kernel@...r.kernel.org; Morten Rasmussen; Sudeep Holla
> > Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations conflicts
> > with lower layer
> > 
> > On Fri, Jan 03, 2020 at 04:24:04AM +0000, Zengtao (B) wrote:
> > > > -----Original Message-----
> > > > From: Valentin Schneider [mailto:valentin.schneider@....com]
> > > > Sent: Thursday, January 02, 2020 9:22 PM
> > > > To: Zengtao (B); Sudeep Holla
> > > > Cc: Linuxarm; Greg Kroah-Hartman; Rafael J. Wysocki;
> > > > linux-kernel@...r.kernel.org; Morten Rasmussen
> > > > Subject: Re: [PATCH] cpu-topology: warn if NUMA configurations
> > conflicts
> > > > with lower layer
> > > >
> > 
> > [...]
> > 
> > > >
> > > > Right, and that is checked when you have sched_debug on the cmdline
> > > > (or write 1 to /sys/kernel/debug/sched_debug & regenerate the sched
> > > > domains)
> > > >
> > >
> > > No, I think you don't get my issue here; please look at my example
> > > first:
> > >
> > > *************************************
> > > NUMA:         0-2,  3-7
> > > core_siblings:    0-3,  4-7
> > > *************************************
> > > When we are building the sched domains, per the current code:
> > > (1) For core 3:
> > >  the MC sched domain falls back to 3-7,
> > >  the DIE sched domain is 3-7.
> > > (2) For core 4:
> > >  the MC sched domain is 4-7,
> > >  the DIE sched domain is 3-7.
> > >
> > > When we build the sched groups for the MC level:
> > > (1) core3's sched group chain is built as: 3->4->5->6->7->3
> > > (2) core4's sched group chain is built as: 4->5->6->7->4
> > > So after (2), core3's sched groups overlap with core4's and no
> > > longer form a closed chain. Any subsequent walk of core3's sched
> > > groups then loops forever.
> > >
> > > It's difficult for the scheduler to detect such errors on its own,
> > > which is why I think a warning is necessary here.
> > >
> > 
> > We can figure out a way to warn if it's absolutely necessary, but I
> > would like to understand the system topology here first. You haven't
> > answered my query on the cache topology. Please describe in more
> > detail, with the specific hardware design, why the NUMA configuration
> > looks like the above example. Or is this just a case where the user
> > can specify anything they wish?
> >
> 
> Sorry for the late response. In fact, it's a VM use case; you can
> simply treat it as a test case. It's a corner case, but it hangs the
> kernel, which is why I suggest a warning is needed.
> 
> I think we need a sanity check, or simply a warning, either in the
> scheduler or in the arch topology parsing.

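Restating the quoted example in code may help. Below is a minimal
userspace simulation (all names invented for illustration; the real
construction is done by build_sched_groups() in kernel/sched/topology.c)
of why the walk never terminates once two spans at the same level
partially overlap:

#include <stdio.h>

#define NR_CPUS 8

/* one shared group per CPU at the MC level, as in the kernel's
 * non-overlapping group construction */
struct group {
	int cpu;
	struct group *next;
};
static struct group groups[NR_CPUS];

/* link a circular chain over span [first, last], mimicking what
 * build_sched_groups() does for one CPU's domain span */
static void build_chain(int first, int last)
{
	for (int cpu = first; cpu <= last; cpu++) {
		groups[cpu].cpu = cpu;
		groups[cpu].next = &groups[cpu == last ? first : cpu + 1];
	}
}

int main(void)
{
	build_chain(3, 7);	/* core3's MC span: 3-7 (after fallback) */
	build_chain(4, 7);	/* core4's MC span: 4-7; relinks 7->4 */

	/* walk core3's chain as the scheduler would; bounded here,
	 * unbounded in the kernel */
	struct group *g = &groups[3];
	for (int i = 0; i < 10; i++, g = g->next)
		printf("%d -> ", g->cpu);
	printf("...\n");
	return 0;
}

It prints 3 -> 4 -> 5 -> 6 -> 7 -> 4 -> 5 -> ...: once core4's chain is
linked, the walk from core3 never returns to its starting group, so any
termination test of the form "stop when we are back at the first group"
spins forever.
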
IIUC, the problem is that virt can set up a broken topology in some
cases where MPIDR doesn't line up correctly with the defined NUMA nodes.

We could argue that it is a qemu/virt problem, but it would be nice if
we could at least detect it. The proposed patch isn't really the right
solution, since it warns on some valid topologies, as Sudeep already
pointed out.

It sounds more like we need a mask subset check in the sched_domain
building code, if there isn't already one?

Morten
