Message-ID: <20190924092551.GK2369@hirez.programming.kicks-ass.net>
Date: Tue, 24 Sep 2019 11:25:51 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Yunsheng Lin <linyunsheng@...wei.com>
Cc: Michal Hocko <mhocko@...nel.org>, catalin.marinas@....com,
will@...nel.org, mingo@...hat.com, bp@...en8.de, rth@...ddle.net,
ink@...assic.park.msu.ru, mattst88@...il.com,
benh@...nel.crashing.org, paulus@...ba.org, mpe@...erman.id.au,
heiko.carstens@...ibm.com, gor@...ux.ibm.com,
borntraeger@...ibm.com, ysato@...rs.sourceforge.jp,
dalias@...c.org, davem@...emloft.net, ralf@...ux-mips.org,
paul.burton@...s.com, jhogan@...nel.org, jiaxun.yang@...goat.com,
chenhc@...ote.com, akpm@...ux-foundation.org, rppt@...ux.ibm.com,
anshuman.khandual@....com, tglx@...utronix.de, cai@....pw,
robin.murphy@....com, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, hpa@...or.com, x86@...nel.org,
dave.hansen@...ux.intel.com, luto@...nel.org, len.brown@...el.com,
axboe@...nel.dk, dledford@...hat.com, jeffrey.t.kirsher@...el.com,
linux-alpha@...r.kernel.org, naveen.n.rao@...ux.vnet.ibm.com,
mwb@...ux.vnet.ibm.com, linuxppc-dev@...ts.ozlabs.org,
linux-s390@...r.kernel.org, linux-sh@...r.kernel.org,
sparclinux@...r.kernel.org, tbogendoerfer@...e.de,
linux-mips@...r.kernel.org, rafael@...nel.org,
gregkh@...uxfoundation.org
Subject: Re: [PATCH v6] numa: make node_to_cpumask_map() NUMA_NO_NODE aware
On Tue, Sep 24, 2019 at 09:29:50AM +0800, Yunsheng Lin wrote:
> On 2019/9/24 4:34, Peter Zijlstra wrote:
> > I'm saying the ACPI standard is wrong. Explain to me how it is
> > physically possible to have a device without NUMA affinity in a NUMA
> > system?
> >
> > 1) The fundamental interconnect is not uniform.
> > 2) The device needs to actually be somewhere.
> >
>
> From what I can see, NUMA_NO_NODE may make sense in the following case:
>
> 1) Theoretically, there could be a device that accesses all memory
> uniformly and is accessed by all cpus uniformly, even in a NUMA system.
> Suppose we have two nodes and the device sits in the middle of the
> interconnect between them.
>
> Even if we define a third node solely for the device, we would still
> need to look at the node distances to decide whether the device can be
> accessed uniformly.
>
> Or we can declare that the device is accessed uniformly by setting its
> node to NUMA_NO_NODE.
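
(For reference, this is the case the patch in the Subject addresses: a
minimal sketch of a NUMA_NO_NODE-aware node-to-cpumask lookup, modelled
on the x86 cpumask_of_node(); treating NUMA_NO_NODE as "any online CPU"
is an assumption about the fallback, not necessarily the final patch:)

	static inline const struct cpumask *cpumask_of_node(int node)
	{
		/* No affinity: any online CPU may service the device. */
		if (node == NUMA_NO_NODE)
			return cpu_online_mask;

		/* An out-of-range node is a caller bug, not "no node". */
		if (WARN_ON((unsigned int)node >= nr_node_ids))
			return cpu_none_mask;

		return node_to_cpumask_map[node];
	}
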
This is indeed a theoretical case; it doesn't scale. The moment you're
adding multiple sockets or even board interconnects this all goes out
the window.
And in this case, forcing the device to either node is fine.
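
(A hedged sketch of that: a driver could pin a node-less device at probe
time using the existing dev_to_node()/set_dev_node() helpers. The helper
name is hypothetical, and picking the probing CPU's nearest memory node
via numa_mem_id() is just one plausible policy:)

	/* Give a device with no firmware affinity a concrete home node. */
	static void pin_dev_node(struct device *dev)
	{
		if (dev_to_node(dev) == NUMA_NO_NODE)
			set_dev_node(dev, numa_mem_id());
	}
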
> 2) Many virtual devices, such as the tun or loopback netdevice, are
> also accessed uniformly by all cpus.
Not true; the virtual device will sit in memory local to some node.
And as with physical devices, you probably want at least one (virtual)
queue per node.
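
(A sketch of that per-node queue pattern; the struct and field names are
hypothetical, while kzalloc_node() and for_each_online_node() are the
real helpers:)

	/* One queue per node, each backed by node-local memory. */
	static int alloc_per_node_queues(struct my_dev *mdev)
	{
		int node;

		for_each_online_node(node) {
			mdev->queue[node] = kzalloc_node(sizeof(*mdev->queue[node]),
							 GFP_KERNEL, node);
			if (!mdev->queue[node])
				return -ENOMEM;	/* caller unwinds */
		}
		return 0;
	}
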