Message-ID: <20170106171300.GA3804@redhat.com>
Date: Fri, 6 Jan 2017 12:13:01 -0500
From: Jerome Glisse <jglisse@...hat.com>
To: David Nellans <dnellans@...dia.com>
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, John Hubbard <jhubbard@...dia.com>,
Evgeny Baskakov <ebaskakov@...dia.com>,
Mark Hairgrove <mhairgrove@...dia.com>,
Sherry Cheung <SCheung@...dia.com>,
Subhash Gutti <sgutti@...dia.com>,
Cameron Buschardt <cabuschardt@...dia.com>,
Zi Yan <zi.yan@...rutgers.edu>,
Anshuman Khandual <khandual@...ux.vnet.ibm.com>
Subject: Re: [HMM v15 13/16] mm/hmm/migrate: new memory migration helper for
use with device memory v2
On Fri, Jan 06, 2017 at 10:46:09AM -0600, David Nellans wrote:
>
>
> On 01/06/2017 10:46 AM, Jérôme Glisse wrote:
> > This patch adds a new memory migration helper, which migrates the
> > memory backing a range of virtual addresses of a process to different
> > memory (which can be allocated through a special allocator). It
> > differs from NUMA migration by working on a range of virtual
> > addresses, and thus by doing migration in chunks that can be large
> > enough to use a DMA engine or special copy-offloading engine.
> >
> > Expected users are anyone with heterogeneous memory where different
> > memories have different characteristics (latency, bandwidth, ...). As
> > an example, an IBM platform with a CAPI bus can make use of this
> > feature to migrate between regular memory and CAPI device memory. New
> > CPU architectures with a pool of high-performance memory not managed
> > as a cache but presented as regular memory (while being faster and
> > lower latency than DDR) will also be prime users of this patch.
> Why should the normal page migration path (where neither src nor dest
> is device private) use the hmm_migrate functionality? Patches 11-14
> replicate a lot of the normal migration functionality, but with
> special-casing for HMM requirements.
You are mischaracterizing patches 11-14. Patches 11-12 add new flags and
modify existing functions so that they can be shared. Patch 13 implements
the new migration helper, while patch 14 optimizes it.
hmm_migrate() is different from the existing migration code because it
works on a virtual address range of a process, whereas the existing
migration code works from a page. The only difference from the existing
code is that we collect pages from virtual addresses and we allow use of
a DMA engine to perform the copy.
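To illustrate, here is a rough sketch of the kind of interface I mean.
The names and signatures below are illustrative only, loosely modeled on
this discussion, not a verbatim copy of the patchset:

/*
 * Sketch only: the caller hands in a virtual address range of a
 * process plus copy callbacks, instead of a list of struct page.
 */
struct hmm_migrate_ops {
	/*
	 * Allocate destination pages and copy the data over; a driver
	 * is free to use its DMA engine here instead of memcpy().
	 */
	void (*alloc_and_copy)(struct vm_area_struct *vma,
			       const unsigned long *src_pfns,
			       unsigned long *dst_pfns,
			       unsigned long start,
			       unsigned long end,
			       void *private);
	/*
	 * Called after the copy, so the driver can commit or abort
	 * before the CPU page table is updated to the new pages.
	 */
	void (*finalize_and_map)(struct vm_area_struct *vma,
				 const unsigned long *src_pfns,
				 const unsigned long *dst_pfns,
				 unsigned long start,
				 unsigned long end,
				 void *private);
};

/* Migrate everything mapped in [start, end) of vma in one chunk. */
int hmm_vma_migrate(const struct hmm_migrate_ops *ops,
		    struct vm_area_struct *vma,
		    unsigned long start, unsigned long end,
		    void *private);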
> When migrating THPs or a list of pages (your use case above), normal
> NUMA migration is going to want to do this as fast as possible too (see
> Zi Yan's patches for multi-threading normal migrations and a prototype
> of using Intel IOAT for transfers; he sees a 3-5x speedup).
This is a core feature of HMM, and as such, optimizations like better
THP support are deferred to a later patchset.
>
> If the intention is to provide a common interface hook for migration to
> use DMA acceleration (which is a good idea), it probably shouldn't be
> special-cased inside HMM functionality. For example, using the Intel
> IOAT for migration DMA has nothing to do with HMM whatsoever. We need a
> normal migration path interface to allow DMA that isn't tied to HMM.
There is nothing that ties hmm_migrate() to HMM. If it makes you feel
better I can drop the hmm_ prefix, but I would need a name other than
migrate() as that one is already taken. I could probably name it
vma_range_dma_migrate() or something like that.
The only thing that is HMM-specific in this code is understanding the
HMM special page table entries and handling them. Such entries can only
be migrated by DMA and not by memcpy, which is why I do not modify the
existing code to support them.
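To make the DMA-versus-memcpy point concrete, a driver copy callback
could look something like the sketch below; the my_device_* helpers are
placeholders standing in for a driver's own DMA engine API, not an
existing interface:

static void my_alloc_and_copy(struct vm_area_struct *vma,
			      const unsigned long *src_pfns,
			      unsigned long *dst_pfns,
			      unsigned long start,
			      unsigned long end,
			      void *private)
{
	struct my_device *mdev = private;
	unsigned long addr;
	unsigned long i;

	for (addr = start, i = 0; addr < end; addr += PAGE_SIZE, i++) {
		/* Allocate the destination page in device memory. */
		dst_pfns[i] = my_device_alloc_page(mdev);

		/*
		 * The source may be an HMM special entry pointing at
		 * device memory the CPU cannot address, so memcpy() is
		 * not an option: queue a DMA copy instead.
		 */
		my_device_dma_copy(mdev, src_pfns[i], dst_pfns[i]);
	}

	/* Wait for all queued DMA copies to complete. */
	my_device_dma_wait(mdev);
}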
Cheers,
Jérôme