Message-ID: <6960db38-3e40-d58c-c9a0-7e2fe259cac5@amd.com>
Date: Wed, 5 Dec 2018 01:22:41 +0000
From: "Kuehling, Felix" <Felix.Kuehling@....com>
To: Jerome Glisse <jglisse@...hat.com>,
Dave Hansen <dave.hansen@...el.com>
CC: "linux-mm@...ck.org" <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Rafael J . Wysocki" <rafael@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Ross Zwisler <ross.zwisler@...ux.intel.com>,
Keith Busch <keith.busch@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
Haggai Eran <haggaie@...lanox.com>,
Balbir Singh <bsingharora@...il.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
"Yang, Philip" <Philip.Yang@....com>,
"Koenig, Christian" <Christian.Koenig@....com>,
"Blinzer, Paul" <Paul.Blinzer@....com>,
Logan Gunthorpe <logang@...tatee.com>,
John Hubbard <jhubbard@...dia.com>,
Ralph Campbell <rcampbell@...dia.com>,
Michal Hocko <mhocko@...nel.org>,
Jonathan Cameron <jonathan.cameron@...wei.com>,
Mark Hairgrove <mhairgrove@...dia.com>,
Vivek Kini <vkini@...dia.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Dave Airlie <airlied@...hat.com>,
Ben Skeggs <bskeggs@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Rik van Riel <riel@...riel.com>,
Ben Woodard <woodard@...hat.com>,
"linux-acpi@...r.kernel.org" <linux-acpi@...r.kernel.org>
Subject: Re: [RFC PATCH 00/14] Heterogeneous Memory System (HMS) and hbind()
On 2018-12-04 4:57 p.m., Jerome Glisse wrote:
> On Tue, Dec 04, 2018 at 01:37:56PM -0800, Dave Hansen wrote:
>> Yeah, our NUMA mechanisms are for managing memory that the kernel itself
>> manages in the "normal" allocator and supports a full feature set on.
>> That has a bunch of implications, like that the memory is cache coherent
>> and accessible from everywhere.
>>
>> The HMAT patches only comprehend this "normal" memory, which is why
>> we're extending the existing /sys/devices/system/node infrastructure.
>>
>> This series has a much more aggressive goal, which is comprehending the
>> connections of every memory-target to every memory-initiator, no matter
>> who is managing the memory, who can access it, or what it can be used for.
>>
>> Theoretically, HMS could be used for everything that we're doing with
>> /sys/devices/system/node, as long as it's tied back into the existing
>> NUMA infrastructure _somehow_.
>>
>> Right?
> Fully correct. Mind if I steal that perfect summary description next time
> I post? I am so bad at explaining things :)
>
> The intention is to allow programs to do everything they do with mbind()
> today, and tomorrow with the HMAT patchset, and on top of that to also be
> able to do what they do today through APIs like OpenCL, ROCm, CUDA ... So
> it is one kernel API to rule them all ;)
As for ROCm, I'm looking forward to using hbind in our own APIs. It will
save us some time and trouble, since we won't have to implement all the
low-level policy and tracking of virtual address ranges in our device driver.
Going forward, having a common API to manage the topology and memory
affinity would also enable sane ways of having accelerators and memory
devices from different vendors interact under control of a
topology-aware application.
Disclaimer: I haven't had a chance to review the patches in detail yet.
Got caught up in the documentation and discussion ...
Regards,
Felix
>
> Also, at first I intend to special-case vma page allocation when an HMS
> policy is set; long term I would like to merge the code paths inside the
> kernel. But I do not want to disrupt the existing code paths today; I would
> rather grow to that organically, step by step. In the end mbind() would
> still work unaffected, just the plumbing would be slightly different.
>
> Cheers,
> Jérôme