Message-ID: <d1dd967c-69e6-f673-0c88-06bb4e234872@nvidia.com>
Date: Fri, 17 Mar 2017 18:32:27 -0700
From: John Hubbard <jhubbard@...dia.com>
To: Jérôme Glisse <jglisse@...hat.com>,
<akpm@...ux-foundation.org>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>
CC: Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
David Nellans <dnellans@...dia.com>
Subject: Re: [HMM 2/2] hmm: heterogeneous memory management documentation
On 03/17/2017 12:27 PM, Jérôme Glisse wrote:
> This adds documentation for HMM (Heterogeneous Memory Management). It
> presents the motivation behind it, the features necessary for it to be
> useful, and gives an overview of how this is implemented.
For this patch, I will leave it to others to decide how to proceed, given the following:
1. This hmm.txt has a lot of critical information in it.
2. It is, however, more of a first draft than a final draft: lots of errors in each sentence, and
lots of paragraphs that need re-doing, for example. After a quick pass through a few other
Documentation/vm/*.txt documents to gauge the quality bar, I am inclined to recommend (or do) a
second draft of this, before submitting it.
Since I'm the one being harsh here (and Jerome, you already know I'm harsh! haha), I can provide a
second draft. But it won't look much like the current draft, so brace yourself before saying yes... :)
thanks
John Hubbard
NVIDIA
>
> Signed-off-by: Jérôme Glisse <jglisse@...hat.com>
> ---
> Documentation/vm/hmm.txt | 362 +++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 362 insertions(+)
> create mode 100644 Documentation/vm/hmm.txt
>
> diff --git a/Documentation/vm/hmm.txt b/Documentation/vm/hmm.txt
> new file mode 100644
> index 0000000..a6829ba
> --- /dev/null
> +++ b/Documentation/vm/hmm.txt
> @@ -0,0 +1,362 @@
> +Heterogeneous Memory Management (HMM)
> +
> +HMM transparently allows any component of a program to use any memory region
> +of that program with a device, without going through a device specific memory
> +allocator. This is becoming a requirement to simplify the use of advanced
> +heterogeneous computing, where GPUs, DSPs or FPGAs are used to perform various
> +computations.
> +
> +This document is divided as follows: the first section describes the problems
> +that come with using a device specific allocator. The second section covers
> +the hardware limitations that are inherent to many platforms. The third
> +section gives an overview of the HMM design. The fourth section explains how
> +CPU page table mirroring works and what HMM's purpose is in this context. The
> +fifth section deals with how device memory is represented inside the kernel.
> +Finally, the last section presents the new migration helpers that allow
> +leveraging the device DMA engine.
> +
> +
> +-------------------------------------------------------------------------------
> +
> +1) Problems of using a device specific memory allocator
> +
> +Devices with a large amount of on board memory (several gigabytes), like GPUs,
> +have historically managed their memory through a dedicated driver specific
> +API. This creates a disconnect between memory allocated and managed by the
> +device driver and regular application memory (private anonymous memory, shared
> +memory or regular file backed memory). From here on I will refer to this
> +aspect as split address space. I use shared address space to refer to the
> +opposite situation, i.e. one in which any memory region can be used by the
> +device transparently.
> +
> +The address space is split because the device can only access memory allocated
> +through the device specific API. This implies that all memory objects in a
> +program are not equal from the device point of view, which complicates large
> +programs that rely on a wide set of libraries.
> +
> +Concretely, this means that code that wants to leverage a device like a GPU
> +needs to copy objects between generically allocated memory (malloc, mmap
> +private, mmap share) and memory allocated through the device driver API (this
> +still ends up with an mmap, but of the device file).
> +
> +For flat data sets (array, grid, image, ...) this isn't too hard to achieve,
> +but complex data sets (list, tree, ...) are hard to get right. Duplicating a
> +complex data set requires re-mapping all the pointer relations between each of
> +its elements. This is error prone and programs get harder to debug because of
> +the duplicated data set.
> +
> +Split address space also means that libraries cannot transparently use data
> +they get from the core program or from other libraries, and thus each library
> +might have to duplicate its input data set using the device specific memory
> +allocator. Large projects suffer from this and waste resources because of the
> +various memory copies.
> +
> +Duplicating each library API to accept as input or output memory allocated by
> +each device specific allocator is not a viable option. It would lead to a
> +combinatorial explosion in the number of library entry points.
> +
> +Finally, with the advance of high level language constructs (in C++ but in
> +other languages too) it is now possible for the compiler to leverage GPUs or
> +other devices without the programmer even knowing about it. Some of these
> +compiler identified patterns are only doable with a shared address space. It
> +is also more reasonable to use a shared address space for all the other
> +patterns.
> +
> +
> +-------------------------------------------------------------------------------
> +
> +2) System bus, device memory characteristics
> +
> +System buses cripple shared address spaces due to a few limitations. Most
> +system buses only allow basic memory access from the device to main memory;
> +even cache coherency is often optional. Access to device memory from the CPU
> +is even more limited; more often than not it is not cache coherent.
> +
> +If we only consider the PCIE bus, then a device can access main memory (often
> +through an IOMMU) and be cache coherent with the CPUs. However, it only allows
> +a limited set of atomic operations from the device on main memory. This is
> +worse in the other direction: the CPUs can only access a limited range of the
> +device memory and cannot perform atomic operations on it. Thus device memory
> +cannot be considered the same as regular memory from the kernel point of view.
> +
> +Another crippling factor is the limited bandwidth (~32GBytes/s with PCIE 4.0
> +and 16 lanes). This is 33 times less than the fastest GPU memory (1 TBytes/s).
> +The final limitation is latency: access to main memory from the device has an
> +order of magnitude higher latency than when the device accesses its own
> +memory.
> +
> +Some platforms are developing new system buses or additions/modifications to
> +PCIE to address some of these limitations (OpenCAPI, CCIX). They mainly allow
> +two way cache coherency between the CPU and the device and allow all the
> +atomic operations the architecture supports. Sadly, not all platforms are
> +following this trend and some major architectures are left without hardware
> +solutions to these problems.
> +
> +So for the shared address space to make sense, not only must we allow the
> +device to access any memory, but we must also permit any memory to be migrated
> +to device memory while the device is using it (blocking CPU access while it
> +happens).
> +
> +
> +-------------------------------------------------------------------------------
> +
> +3) Shared address space and migration
> +
> +HMM intends to provide two main features. The first one is to share the
> +address space by duplicating the CPU page table into the device page table,
> +so that the same address points to the same memory, and this for any valid
> +main memory address in the process address space.
> +
> +To achieve this, HMM offers a set of helpers to populate the device page table
> +while keeping track of CPU page table updates. Device page table updates are
> +not as easy as CPU page table updates. To update the device page table you
> +must allocate a buffer (or use a pool of pre-allocated buffers) and write GPU
> +specific commands into it to perform the update (unmap, cache invalidations
> +and flush, ...). This cannot be done through common code for all devices. This
> +is why HMM provides helpers to factor out everything that can be factored out,
> +while leaving the gory details to the device driver.
> +
> +The second mechanism HMM provides is a new kind of ZONE_DEVICE memory that
> +allows allocating a struct page for each page of the device memory. Those
> +pages are special because the CPU cannot map them. However, they allow
> +migrating main memory to device memory using the existing migration
> +mechanisms, and from the CPU point of view everything looks as if the page
> +was swapped out to disk. Using a struct page gives the easiest and cleanest
> +integration with existing mm mechanisms. Again, HMM only provides helpers:
> +first to hotplug new ZONE_DEVICE memory for the device memory, and second to
> +perform the migration. Policy decisions of what and when to migrate are left
> +to the device driver.
> +
> +Note that any CPU access to a device page triggers a page fault and a
> +migration back to main memory. That is, when a page backing a given address A
> +is migrated from a main memory page to a device page, then any CPU access to
> +address A triggers a page fault and initiates a migration back to main memory.
> +
> +
> +With these two features, HMM not only allows a device to mirror a process
> +address space and keeps both the CPU and device page tables synchronized, but
> +also allows leveraging device memory by migrating the parts of a data set that
> +are actively used by a device.
> +
> +
> +-------------------------------------------------------------------------------
> +
> +4) Address space mirroring implementation and API
> +
> +The main objective of address space mirroring is to allow duplicating a range
> +of the CPU page table into a device page table, with HMM helping to keep both
> +synchronized. A device driver that wants to mirror a process address space
> +must start with the registration of an hmm_mirror struct:
> +
> +    int hmm_mirror_register(struct hmm_mirror *mirror,
> +                            struct mm_struct *mm);
> +    int hmm_mirror_register_locked(struct hmm_mirror *mirror,
> +                                   struct mm_struct *mm);
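> +
> +As a rough illustration (hypothetical driver code, not part of the HMM API
> +itself), a driver would typically embed the hmm_mirror struct in its own per
> +process structure and register it against the process mm. The ops wiring
> +below assumes the mirror carries a pointer to the hmm_mirror_ops described
> +next:
> +
> +    struct driver_mirror {
> +        struct hmm_mirror mirror;
> +        /* driver private bits: device page table root, locks, ... */
> +    };
> +
> +    static int driver_mirror_process(struct driver_mirror *dmirror,
> +                                     struct mm_struct *mm)
> +    {
> +        /* Assumption: the mirror points to the driver's hmm_mirror_ops. */
> +        dmirror->mirror.ops = &driver_mirror_ops;
> +        /* Register the mirror so HMM starts reporting CPU page table
> +         * updates for this mm through the update() callback. */
> +        return hmm_mirror_register(&dmirror->mirror, mm);
> +    }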
> +
> +The locked variant is to be used when the driver is already holding the
> +mmap_sem of the mm in write mode. The mirror struct has a set of callbacks
> +that are used to propagate CPU page table updates:
> +
> +    struct hmm_mirror_ops {
> +        /* update() - update virtual address range of memory
> +         *
> +         * @mirror: pointer to struct hmm_mirror
> +         * @update: update's type (turn read only, unmap, ...)
> +         * @start: virtual start address of the range to update
> +         * @end: virtual end address of the range to update
> +         *
> +         * This callback is called when the CPU page table is updated; the
> +         * device driver must update the device page table in accordance
> +         * with the update's action.
> +         *
> +         * The device driver callback must wait until the device has fully
> +         * updated its view of the range. Note we plan to make this
> +         * asynchronous in later patches, so that multiple devices can
> +         * schedule updates to their page tables, and once all devices
> +         * have scheduled the update we then wait for them to propagate.
> +         */
> +        void (*update)(struct hmm_mirror *mirror,
> +                       enum hmm_update action,
> +                       unsigned long start,
> +                       unsigned long end);
> +    };
> +
> +The device driver must update the range following the action (turn the range
> +read only, fully unmap it, ...). Once the driver callback returns, the device
> +must be done with the update.
> +
> +
> +When the device driver wants to populate a range of virtual addresses it can
> +use either:
> +    int hmm_vma_get_pfns(struct vm_area_struct *vma,
> +                         struct hmm_range *range,
> +                         unsigned long start,
> +                         unsigned long end,
> +                         hmm_pfn_t *pfns);
> +    int hmm_vma_fault(struct vm_area_struct *vma,
> +                      struct hmm_range *range,
> +                      unsigned long start,
> +                      unsigned long end,
> +                      hmm_pfn_t *pfns,
> +                      bool write,
> +                      bool block);
> +
> +The first one (hmm_vma_get_pfns()) will only fetch present CPU page table
> +entries and will not trigger a page fault on missing or non present entries.
> +The second one does trigger a page fault on missing entries, and on read only
> +entries if the write parameter is true. Page faults use the generic mm page
> +fault code path, just like a CPU page fault.
> +
> +Both functions copy the CPU page table into their pfns array argument. Each
> +entry in that array corresponds to an address in the virtual range. HMM
> +provides a set of flags to help the driver identify special CPU page table
> +entries.
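> +
> +As an example sketch only (the flag and helper names below are placeholders,
> +not the real hmm_pfn_t API), a driver walking the pfns array after a
> +successful snapshot might do something like:
> +
> +    unsigned long i, npages = (end - start) >> PAGE_SHIFT;
> +
> +    for (i = 0; i < npages; i++) {
> +        /* Illustrative flag names; the real flags are defined by HMM. */
> +        if (!(pfns[i] & HMM_PFN_VALID))
> +            continue; /* hole or non present entry, leave it unmapped */
> +        /* driver_dpt_set() is a made up driver helper writing one device
> +         * page table entry for the address start + (i << PAGE_SHIFT). */
> +        driver_dpt_set(dpt, start + (i << PAGE_SHIFT),
> +                       pfns[i], pfns[i] & HMM_PFN_WRITE);
> +    }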
> +
> +Locking within the update() callback is the most important aspect the driver
> +must respect in order to keep things properly synchronized. The usage pattern
> +is:
> +
> +    int driver_populate_range(...)
> +    {
> +        struct hmm_range range;
> +        ...
> +    again:
> +        ret = hmm_vma_get_pfns(vma, &range, start, end, pfns);
> +        if (ret)
> +            return ret;
> +        take_lock(driver->update);
> +        if (!hmm_vma_range_done(vma, &range)) {
> +            release_lock(driver->update);
> +            goto again;
> +        }
> +
> +        // Use pfns array content to update device page table
> +
> +        release_lock(driver->update);
> +        return 0;
> +    }
> +
> +The driver->update lock is the same lock that the driver takes inside its
> +update() callback. That lock must be held before calling hmm_vma_range_done()
> +to avoid any race with a concurrent CPU page table update.
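> +
> +Putting both sides together, the driver's update() callback would look
> +something like the following sketch (hypothetical code: the command
> +submission helpers are driver specific and the way the action is handled is
> +only indicative):
> +
> +    static void driver_update(struct hmm_mirror *mirror,
> +                              enum hmm_update action,
> +                              unsigned long start,
> +                              unsigned long end)
> +    {
> +        struct driver_mirror *dmirror;
> +
> +        dmirror = container_of(mirror, struct driver_mirror, mirror);
> +
> +        take_lock(driver->update);
> +        /* Build and submit device commands that unmap or write protect
> +         * [start, end) in the device page table, depending on action. */
> +        driver_invalidate_range(dmirror, action, start, end);
> +        /* Must not return before the device is done with the update. */
> +        driver_wait_commands_idle(dmirror);
> +        release_lock(driver->update);
> +    }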
> +
> +HMM implements all of this on top of the mmu_notifier API because we wanted a
> +simpler API and also to be able to perform optimizations later on, like doing
> +concurrent device updates in multi-device scenarios.
> +
> +HMM also serves to bridge the impedance mismatch between how CPU page table
> +updates are done (by CPU writes to the page table and TLB flushes) and how
> +devices update their own page tables. A device update is a multi-step process:
> +first the appropriate commands are written to a buffer, then this buffer is
> +scheduled for execution on the device. It is only once the device has executed
> +the commands in the buffer that the update is done. Creating and scheduling
> +the update command buffers can happen concurrently for multiple devices.
> +Waiting for each device to report the commands as executed is serialized
> +(there is no point in doing this concurrently).
> +
> +
> +-------------------------------------------------------------------------------
> +
> +5) Represent and manage device memory from core kernel point of view
> +
> +Several different designs were tried to support device memory. The first one
> +used a device specific data structure to keep information about migrated
> +memory, and HMM hooked itself into various places of the mm code to handle any
> +access to addresses that were backed by device memory. It turns out that this
> +ended up replicating most of the fields of struct page and also required many
> +kernel code paths to be updated to understand this new kind of memory.
> +
> +The thing is, most kernel code paths never try to access the memory behind a
> +page but only care about the struct page contents. Because of this, HMM
> +switched to directly using struct page for device memory, which leaves most
> +kernel code paths unaware of the difference. We only need to make sure that
> +no one ever tries to map those pages from the CPU side.
> +
> +HMM provides a set of helpers to register and hotplug device memory as a new
> +region needing struct page. This is offered through a very simple API:
> +
> +    struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
> +                                      struct device *device,
> +                                      unsigned long size);
> +    void hmm_devmem_remove(struct hmm_devmem *devmem);
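> +
> +For instance (an illustrative sketch, not taken from this patch), a driver
> +would typically hotplug its memory at probe time; the driver_device struct,
> +the vram_size field and the ERR_PTR style error handling are assumptions made
> +for this example:
> +
> +    static int driver_memory_probe(struct driver_device *ddev)
> +    {
> +        ddev->devmem = hmm_devmem_add(&driver_devmem_ops, ddev->dev,
> +                                      ddev->vram_size);
> +        if (IS_ERR(ddev->devmem))
> +            return PTR_ERR(ddev->devmem);
> +        /* Pages of ddev->devmem can now be used as migration targets. */
> +        return 0;
> +    }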
> +
> +The hmm_devmem_ops struct is where most of the important things are:
> +
> +    struct hmm_devmem_ops {
> +        void (*free)(struct hmm_devmem *devmem, struct page *page);
> +        int (*fault)(struct hmm_devmem *devmem,
> +                     struct vm_area_struct *vma,
> +                     unsigned long addr,
> +                     struct page *page,
> +                     unsigned flags,
> +                     pmd_t *pmdp);
> +    };
> +
> +The first callback (free()) happens when the last reference on a device page
> +is dropped. This means the device page is now free and no longer used by
> +anyone. The second callback happens whenever the CPU tries to access a device
> +page, which it cannot do. This second callback must trigger a migration back
> +to system memory; HMM provides a helper to do just that:
> +
> +    int hmm_devmem_fault_range(struct hmm_devmem *devmem,
> +                               struct vm_area_struct *vma,
> +                               const struct migrate_vma_ops *ops,
> +                               unsigned long mentry,
> +                               unsigned long *src,
> +                               unsigned long *dst,
> +                               unsigned long start,
> +                               unsigned long addr,
> +                               unsigned long end,
> +                               void *private);
> +
> +It relies on the new migrate_vma() helper, which is a generic page migration
> +helper that works on a range of virtual addresses instead of working on
> +individual pages. It also allows leveraging the device DMA engine to perform
> +the copy from device memory to main memory (or in the other direction). The
> +next section goes over this new helper.
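> +
> +To illustrate, a fault() callback would mostly forward to this helper with
> +the driver's migrate_vma_ops so the copy back to main memory goes through the
> +device DMA engine. This is a hypothetical sketch: the meaning of the mentry
> +argument is not covered here (0 is used purely as a placeholder) and the
> +single page src/dst arrays are an assumption of this example:
> +
> +    static int driver_devmem_fault(struct hmm_devmem *devmem,
> +                                   struct vm_area_struct *vma,
> +                                   unsigned long addr,
> +                                   struct page *page,
> +                                   unsigned flags,
> +                                   pmd_t *pmdp)
> +    {
> +        unsigned long src = 0, dst = 0;
> +        unsigned long start = addr & PAGE_MASK;
> +        unsigned long end = start + PAGE_SIZE;
> +
> +        /* Hand the page back to HMM, which drives driver_migrate_ops (see
> +         * the next section) to copy it back to main memory. */
> +        return hmm_devmem_fault_range(devmem, vma, &driver_migrate_ops,
> +                                      0, &src, &dst, start, addr, end,
> +                                      NULL);
> +    }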
> +
> +
> +-------------------------------------------------------------------------------
> +
> +6) Migrate to and from device memory
> +
> +Because the CPU cannot access device memory, migration must use the device DMA
> +engine to perform the copy from and to device memory. For this we need a new
> +migration helper:
> +
> +    int migrate_vma(const struct migrate_vma_ops *ops,
> +                    struct vm_area_struct *vma,
> +                    unsigned long mentries,
> +                    unsigned long start,
> +                    unsigned long end,
> +                    unsigned long *src,
> +                    unsigned long *dst,
> +                    void *private);
> +
> +Unlike other migration functions, it works on a range of virtual addresses.
> +There are two reasons for that. First, a device DMA copy has a high setup
> +overhead cost and thus batching multiple pages is needed, as otherwise the
> +migration overhead makes the whole exercise pointless. The second reason is
> +that the driver triggers such a migration based on the range of addresses the
> +device is actively accessing.
> +
> +The migrate_vma_ops struct defines two callbacks. The first one
> +(alloc_and_copy()) controls destination memory allocation and the copy
> +operation. The second one is there to allow the device driver to perform
> +cleanup operations after migration.
> +
> +    struct migrate_vma_ops {
> +        void (*alloc_and_copy)(struct vm_area_struct *vma,
> +                               const unsigned long *src,
> +                               unsigned long *dst,
> +                               unsigned long start,
> +                               unsigned long end,
> +                               void *private);
> +        void (*finalize_and_map)(struct vm_area_struct *vma,
> +                                 const unsigned long *src,
> +                                 const unsigned long *dst,
> +                                 unsigned long start,
> +                                 unsigned long end,
> +                                 void *private);
> +    };
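> +
> +As a hedged example of how a driver might drive a migration toward device
> +memory (the driver callbacks and the interpretation of mentries as the number
> +of entries in the src/dst arrays are assumptions of this sketch):
> +
> +    static const struct migrate_vma_ops driver_migrate_ops = {
> +        .alloc_and_copy   = driver_alloc_and_copy,
> +        .finalize_and_map = driver_finalize_and_map,
> +    };
> +
> +    static int driver_migrate_range(struct vm_area_struct *vma,
> +                                    unsigned long start,
> +                                    unsigned long end,
> +                                    void *driver_private)
> +    {
> +        unsigned long npages = (end - start) >> PAGE_SHIFT;
> +        unsigned long src[64] = { 0 }, dst[64] = { 0 };
> +
> +        /* Keep the batch small for the example; a real driver would size
> +         * the arrays according to its own batching policy. */
> +        if (npages > 64)
> +            return -EINVAL;
> +        return migrate_vma(&driver_migrate_ops, vma, npages, start, end,
> +                           src, dst, driver_private);
> +    }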
> +
> +It is important to stress that these migration helpers allow for holes in the
> +virtual address range. Some pages in the range might not be migrated for all
> +the usual reasons (page is pinned, page is locked, ...). The helper does not
> +fail, but just skips over those pages.
> +
> +The alloc_and_copy() callback might also decide not to migrate all pages in
> +the range (for reasons under the callback's control). For those, the callback
> +just has to leave the corresponding dst entries empty.
> +
> +Finally, the migration of a struct page might fail (for file backed pages) for
> +various reasons (failure to freeze the reference count, failure to update the
> +page cache, ...). If that happens, then finalize_and_map() can catch any pages
> +that were not migrated. Note that those pages were still copied to a new page
> +and thus we wasted bandwidth, but this is considered a rare event and a price
> +we are willing to pay to keep the code simpler.
> --
> 2.4.11
>