Message-ID: <CAPcyv4iWPG9wVqe1GW+Ewk4rqELZB6SRR=sF0G8NaabUu2jH_w@mail.gmail.com>
Date: Fri, 8 Sep 2017 13:43:05 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: Bob Liu <liubo95@...wei.com>
Cc: Jerome Glisse <jglisse@...hat.com>,
Ross Zwisler <ross.zwisler@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Linux MM <linux-mm@...ck.org>,
John Hubbard <jhubbard@...dia.com>,
David Nellans <dnellans@...dia.com>,
Balbir Singh <bsingharora@...il.com>,
majiuyue <majiuyue@...wei.com>,
"xieyisheng (A)" <xieyisheng1@...wei.com>
Subject: Re: [HMM-v25 19/19] mm/hmm: add new helper to hotplug CDM memory
region v3
On Thu, Sep 7, 2017 at 6:59 PM, Bob Liu <liubo95@...wei.com> wrote:
> On 2017/9/8 1:27, Jerome Glisse wrote:
[..]
>> No, these are 2 orthogonal things; they do not conflict with each
>> other, quite the contrary. HMM (the CDM part is no different) is a set
>> of helpers, see it as a toolbox, for device drivers.
>>
>> HMAT is a way for firmware to report memory resources with more
>> information than just a range of physical addresses. HMAT is specific
>> to platforms that rely on ACPI. HMAT does not provide any helpers to
>> manage this memory.
>>
>> So a device driver can get information about device memory from HMAT
>> and then use HMM to help manage and use this memory.
>>
>
> Yes, but as Balbir mentioned, this requires:
> 1. Don't online the memory as a NUMA node
> 2. Use the HMM-CDM APIs to map the memory to ZONE_DEVICE via the driver
>
> And I'm not sure whether Intel is going to use this HMM-CDM based
> method for their "target domain" memory, or whether they prefer the
> NUMA approach? Ross? Dan?
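
(For reference, the HMM-CDM path those two steps describe would look
roughly like the sketch below, built on the hmm_devmem_add_resource()
helper this patch adds. The "mydev" names and the callback bodies are
illustrative placeholders, not code from the patchset:)

	#include <linux/hmm.h>
	#include <linux/mm.h>

	/* return the backing page to the device's own allocator */
	static void mydev_free(struct hmm_devmem *devmem, struct page *page)
	{
	}

	/* CDM is CPU-addressable, so this path should rarely fire */
	static int mydev_fault(struct hmm_devmem *devmem,
			       struct vm_area_struct *vma,
			       unsigned long addr,
			       const struct page *page,
			       unsigned int flags,
			       pmd_t *pmdp)
	{
		return VM_FAULT_SIGBUS;
	}

	static const struct hmm_devmem_ops mydev_devmem_ops = {
		.free	= mydev_free,
		.fault	= mydev_fault,
	};

	static int mydev_enable_cdm(struct device *dev, struct resource *res)
	{
		struct hmm_devmem *devmem;

		/*
		 * 'res' is the device memory range, tagged
		 * IORES_DESC_DEVICE_PUBLIC_MEMORY. The pages land in
		 * ZONE_DEVICE (MEMORY_DEVICE_PUBLIC), not in a regular
		 * NUMA node.
		 */
		devmem = hmm_devmem_add_resource(&mydev_devmem_ops, dev, res);
		if (IS_ERR(devmem))
			return PTR_ERR(devmem);
		return 0;
	}
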
The starting / strawman proposal for performance differentiated memory
ranges is to get platform firmware to mark them reserved by default.
Then, after we parse the HMAT, make them available via the device-dax
mechanism so that applications that need 100% guaranteed access to
these potentially high-value / limited-capacity ranges can be sure to
get them by default, i.e. before any random kernel objects are placed
in them. Otherwise, if there are no dedicated users for the memory
ranges via device-dax, or they don't need the total capacity, we want
to hotplug that memory into the general purpose memory allocator with
a NUMA node number so that typical numactl and memory-management flows
are enabled.
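
(To make the second half of that concrete: once no device-dax consumer
claims the range, the kernel-side hand-off is essentially a hotplug
call like the sketch below. 'nid', 'start', and 'size' are placeholders
that would come from the parsed HMAT entry; this is an illustration,
not code from any posted patch:)

	#include <linux/memory_hotplug.h>

	static int hotplug_hmat_range(int nid, u64 start, u64 size)
	{
		/*
		 * Register the range under NUMA node 'nid' so normal
		 * numactl / mempolicy flows can target it. Onlining
		 * still happens through the usual memory-block sysfs
		 * interface (or memhp_auto_online).
		 */
		return add_memory(nid, start, size);
	}
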
Ideally this would not be specific to HMAT and any agent that knows
differentiated performance characteristics of a memory range could use
this scheme.