Message-ID: <20170706230803.GE2919@redhat.com>
Date: Thu, 6 Jul 2017 19:08:04 -0400
From: Jerome Glisse <jglisse@...hat.com>
To: Ross Zwisler <ross.zwisler@...ux.intel.com>
Cc: linux-kernel@...r.kernel.org,
"Anaczkowski, Lukasz" <lukasz.anaczkowski@...el.com>,
"Box, David E" <david.e.box@...el.com>,
"Kogut, Jaroslaw" <Jaroslaw.Kogut@...el.com>,
"Lahtinen, Joonas" <joonas.lahtinen@...el.com>,
"Moore, Robert" <robert.moore@...el.com>,
"Nachimuthu, Murugasamy" <murugasamy.nachimuthu@...el.com>,
"Odzioba, Lukasz" <lukasz.odzioba@...el.com>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
"Schmauss, Erik" <erik.schmauss@...el.com>,
"Verma, Vishal L" <vishal.l.verma@...el.com>,
"Zheng, Lv" <lv.zheng@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Dan Williams <dan.j.williams@...el.com>,
Dave Hansen <dave.hansen@...el.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Len Brown <lenb@...nel.org>,
Tim Chen <tim.c.chen@...ux.intel.com>, devel@...ica.org,
linux-acpi@...r.kernel.org, linux-mm@...ck.org,
linux-nvdimm@...ts.01.org
Subject: Re: [RFC v2 0/5] surface heterogeneous memory performance information
On Thu, Jul 06, 2017 at 03:52:28PM -0600, Ross Zwisler wrote:
[...]
>
> ==== Next steps ====
>
> There is still a lot of work to be done on this series, but the overall
> goal of this RFC is to gather feedback on which of the two options we
> should pursue, or whether some third option is preferred. After that is
> done and we have a solid direction we can add support for ACPI hot add,
> test more complex configurations, etc.
>
> So, for applications that need to differentiate between memory ranges based
> on their performance, what option would work best for you? Is the local
> (initiator,target) performance provided by patch 5 enough, or do you
> require performance information for all possible (initiator,target)
> pairings?

Am I right in assuming that HBM or any faster memory will be relatively small
(1GB - 8GB, maybe 16GB?) and of fixed amount (i.e. the size will depend on the
exact CPU model you have)?

If so, I am wondering if we should not restrict NUMA placement policy for such
nodes to VMAs only, i.e. forbid any policy that would prefer those nodes
globally at thread/process level. This would prevent a wide thread policy from
exhausting this smaller pool of memory.
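
A minimal sketch of what I mean (the HBM node number and the sizes below are
made up, real code would discover them from the information this series
exposes): a VMA-scoped binding via mbind(2), as opposed to a process-wide
set_mempolicy(2), keeps the placement local to one mapping:

#include <numaif.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 64UL << 20;		/* 64MB working set */
	unsigned long hbm_mask = 1UL << 1;	/* assume HBM is node 1 */

	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	/*
	 * Bind only this VMA to the HBM node; the rest of the process
	 * keeps its default policy, so a process- or thread-wide policy
	 * cannot exhaust the small HBM pool.
	 */
	if (mbind(buf, len, MPOL_BIND, &hbm_mask, sizeof(hbm_mask) * 8, 0))
		perror("mbind");

	return 0;
}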

The drawback of doing so would be that existing applications would not benefit
from it. So workloads where it is acceptable to exhaust such memory wouldn't
benefit until their applications are updated.

This is definitely not something impacting this patchset. I am just thinking
about this at large, and I believe that NUMA might need to evolve slightly to
better handle memory hierarchy.

Cheers,
Jérôme