Message-ID: <20190924112811.GK2332@hirez.programming.kicks-ass.net>
Date: Tue, 24 Sep 2019 13:28:11 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Yunsheng Lin <linyunsheng@...wei.com>
Cc: Michal Hocko <mhocko@...nel.org>, catalin.marinas@....com,
will@...nel.org, mingo@...hat.com, bp@...en8.de, rth@...ddle.net,
ink@...assic.park.msu.ru, mattst88@...il.com,
benh@...nel.crashing.org, paulus@...ba.org, mpe@...erman.id.au,
heiko.carstens@...ibm.com, gor@...ux.ibm.com,
borntraeger@...ibm.com, ysato@...rs.sourceforge.jp,
dalias@...c.org, davem@...emloft.net, ralf@...ux-mips.org,
paul.burton@...s.com, jhogan@...nel.org, jiaxun.yang@...goat.com,
chenhc@...ote.com, akpm@...ux-foundation.org, rppt@...ux.ibm.com,
anshuman.khandual@....com, tglx@...utronix.de, cai@....pw,
robin.murphy@....com, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, hpa@...or.com, x86@...nel.org,
dave.hansen@...ux.intel.com, luto@...nel.org, len.brown@...el.com,
axboe@...nel.dk, dledford@...hat.com, jeffrey.t.kirsher@...el.com,
linux-alpha@...r.kernel.org, naveen.n.rao@...ux.vnet.ibm.com,
mwb@...ux.vnet.ibm.com, linuxppc-dev@...ts.ozlabs.org,
linux-s390@...r.kernel.org, linux-sh@...r.kernel.org,
sparclinux@...r.kernel.org, tbogendoerfer@...e.de,
linux-mips@...r.kernel.org, rafael@...nel.org,
gregkh@...uxfoundation.org
Subject: Re: [PATCH v6] numa: make node_to_cpumask_map() NUMA_NO_NODE aware
On Tue, Sep 24, 2019 at 07:07:36PM +0800, Yunsheng Lin wrote:
> On 2019/9/24 17:25, Peter Zijlstra wrote:
> > On Tue, Sep 24, 2019 at 09:29:50AM +0800, Yunsheng Lin wrote:
> >> On 2019/9/24 4:34, Peter Zijlstra wrote:
> >
> >>> I'm saying the ACPI standard is wrong. Explain to me how it is
> >>> physically possible to have a device without NUMA affinity in a NUMA
> >>> system?
> >>>
> >>> 1) The fundamental interconnect is not uniform.
> >>> 2) The device needs to actually be somewhere.
> >>>
> >>
> >> From what I can see, NUMA_NO_NODE may make sense in the case below:
> >>
> >> 1) Theoretically, there could be a device that accesses all memory
> >> uniformly and is accessed by all cpus uniformly, even in a NUMA system.
> >> Suppose we have two nodes, and the device sits in the middle of the
> >> interconnect between them.
> >>
> >> Even if we define a third node solely for the device, we would still
> >> need to look at the node distances to decide whether the device can be
> >> accessed uniformly.
> >>
> >> Or we can indicate that the device is accessed uniformly by setting
> >> its node to NUMA_NO_NODE.
> >
> > This is indeed a theoretical case; it doesn't scale. The moment you're
> > adding multiple sockets or even board interconnects this all goes out
> > the window.
> >
> > And in this case, forcing the device to either node is fine.
>
> Not really.
> For packet sending and receiving, the buffer memory may be allocated
> dynamically. The node of a tx buffer is mainly determined by the cpu
> that is doing the sending, the node of an rx buffer is mainly determined
> by the cpu that the device's interrupt handler runs on, and the device's
> interrupt affinity is mainly based on the node id of the device.
>
> We can bind the processes that are using the device to both nodes
> in order to utilize memory on both nodes for packet sending.
>
> But for packet receiving, node 1 may not be used, because the node id
> of the device is forced to node 0 and the default is to bind the
> interrupt to a cpu on the same node.
>
> If node_to_cpumask_map() returns all usable cpus when the device's node
> id is NUMA_NO_NODE, then the interrupt can be bound to the cpus on both
> nodes.
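For reference, the binding step described above looks roughly like this
(my sketch, not code from the thread; 'irq' and 'pdev' are hypothetical
names):

	/*
	 * Sketch: steer the device's irq towards the CPUs of its home
	 * node. When dev_to_node() returns NUMA_NO_NODE, this node ->
	 * cpumask lookup is exactly where things currently fall apart.
	 */
	irq_set_affinity_hint(irq,
			      cpumask_of_node(dev_to_node(&pdev->dev)));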
Sure; the data can be allocated wherever, but the control structures are
not dynamically allocated every time. They are persistent, and they will
be local to some node.
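As an illustration (my sketch, not code from the thread; 'pdev' and
'struct ring' are hypothetical), such persistent state is typically
allocated once on the device's home node:

	/*
	 * Sketch: descriptor rings and similar control structures are
	 * allocated once, node-local to the device, and stay there.
	 */
	struct ring *ring = kzalloc_node(sizeof(*ring), GFP_KERNEL,
					 dev_to_node(&pdev->dev));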
Anyway, are you saying this stupid corner case is actually relevant?
Because how does it scale out? What if you have 8 sockets, each socket
having 2 nodes and 1 such magic device? Then returning all CPUs is just
plain wrong.
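For concreteness, the change being debated amounts to something like the
following (a sketch assuming an x86-style node_to_cpumask_map[] array;
not the literal patch):

	/*
	 * Sketch of the proposed semantics: treat NUMA_NO_NODE as "no
	 * affinity known" and hand back all online CPUs instead of
	 * indexing the map with an invalid node id.
	 */
	static inline const struct cpumask *cpumask_of_node(int node)
	{
		if (node == NUMA_NO_NODE)
			return cpu_online_mask;
		return node_to_cpumask_map[node];
	}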
> >> 2) Many virtual devices, such as tun or the loopback netdevice, are
> >> also accessed uniformly by all cpus.
> >
> > Not true; the virtual device will sit in memory local to some node.
> >
> > And as with physical devices, you probably want at least one (virtual)
> > queue per node.
>
> There may be similar handling as above for virtual devices too.
And it'd be similarly broken.
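To make the "at least one (virtual) queue per node" point concrete, a
sketch (my illustration; 'dev->queues' and 'struct queue' are
hypothetical):

	/*
	 * Sketch: one queue per online node, each allocated node-local,
	 * so rx processing stays local no matter where the irq lands.
	 */
	int nid;

	for_each_online_node(nid) {
		struct queue *q = kzalloc_node(sizeof(*q), GFP_KERNEL, nid);

		if (!q)
			return -ENOMEM;
		dev->queues[nid] = q;
	}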