Message-ID: <CAPcyv4iLSiJz6Z7qyjqpo=HUZQy-gAcaG69JytLPPGqOO157sg@mail.gmail.com>
Date: Thu, 15 Nov 2018 09:50:58 -0800
From: Dan Williams <dan.j.williams@...el.com>
To: Keith Busch <keith.busch@...el.com>
Cc: Matthew Wilcox <willy@...radead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux ACPI <linux-acpi@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>,
Greg KH <gregkh@...uxfoundation.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Dave Hansen <dave.hansen@...el.com>
Subject: Re: [PATCH 1/7] node: Link memory nodes to their compute nodes
On Thu, Nov 15, 2018 at 7:02 AM Keith Busch <keith.busch@...el.com> wrote:
>
> On Thu, Nov 15, 2018 at 05:57:10AM -0800, Matthew Wilcox wrote:
> > On Wed, Nov 14, 2018 at 03:49:14PM -0700, Keith Busch wrote:
> > > Memory-only nodes will often have affinity to a compute node, and
> > > platforms have ways to express that locality relationship.
> > >
> > > A node containing CPUs or other DMA devices that can initiate memory
> > > access is referred to as a "memory initiator". A "memory target" is a
> > > node that provides at least one physical address range accessible to a
> > > memory initiator.
> >
> > I think I may be confused here. If there is _no_ link from node X to
> > node Y, does that mean that node X's CPUs cannot access the memory on
> > node Y? In my mind, all nodes can access all memory in the system,
> > just not with uniform bandwidth/latency.
>
> The link is just about which nodes are "local". It's like how nodes have
> a cpulist. Other CPUs not in the node's list can access that node's memory,
> but the ones in the mask are local, and provide useful optimization hints.
>
> Would a node mask be preferred to symlinks?
I think that would be more flexible, because more than one initiator
may have "best" or "local" access to a given target.
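For illustration only, here is a rough sketch of what a nodemask-based
alternative could look like: a memory-target node publishing its local
initiators as a single nodelist attribute. The attribute name
("local_initiators") and the per-target mask are made up for this sketch,
not code from this series.

/*
 * Hypothetical sketch: each memory-target node exposes the nodes that are
 * "local" initiators for it as a nodelist, instead of per-node symlinks.
 * The attribute name and the per-target mask below are assumptions.
 */
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/nodemask.h>

/* assumed to be filled in by the platform (e.g. HMAT) parsing code */
static nodemask_t local_initiator_mask[MAX_NUMNODES];

static ssize_t local_initiators_show(struct device *dev,
				     struct device_attribute *attr, char *buf)
{
	int nid = dev->id;	/* node device id is the NUMA node id */

	/* prints a node list such as "0,2-3" */
	return scnprintf(buf, PAGE_SIZE, "%*pbl\n",
			 nodemask_pr_args(&local_initiator_mask[nid]));
}
static DEVICE_ATTR_RO(local_initiators);

Userspace would then read one node list per target rather than walking a
set of symlinks, and the mask naturally handles the case where more than
one initiator is local to the same target.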