Message-ID: <d5c4a464-1f17-8517-3646-33dd5bf06ef5@nvidia.com>
Date: Fri, 6 Jan 2017 10:46:09 -0600
From: David Nellans <dnellans@...dia.com>
To: Jérôme Glisse <jglisse@...hat.com>,
<akpm@...ux-foundation.org>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>
CC: John Hubbard <jhubbard@...dia.com>,
Evgeny Baskakov <ebaskakov@...dia.com>,
Mark Hairgrove <mhairgrove@...dia.com>,
Sherry Cheung <SCheung@...dia.com>,
Subhash Gutti <sgutti@...dia.com>,
Cameron Buschardt <cabuschardt@...dia.com>,
Zi Yan <zi.yan@...rutgers.edu>,
Anshuman Khandual <khandual@...ux.vnet.ibm.com>
Subject: Re: [HMM v15 13/16] mm/hmm/migrate: new memory migration helper for
use with device memory v2
On 01/06/2017 10:46 AM, Jérôme Glisse wrote:
> This patch adds new memory migration helpers, which migrate the memory
> backing a range of virtual addresses of a process to different memory
> (which can be allocated through a special allocator). It differs from
> NUMA migration by working on a range of virtual addresses, and thus by
> doing the migration in chunks that can be large enough to use a DMA
> engine or a special copy-offload engine.
>
> Expected users are anyone with heterogeneous memory where different
> memories have different characteristics (latency, bandwidth, ...). As
> an example, an IBM platform with a CAPI bus can make use of this feature
> to migrate between regular memory and CAPI device memory. New CPU
> architectures with a pool of high-performance memory that is not managed
> as a cache but presented as regular memory (while being faster and having
> lower latency than DDR) will also be prime users of this patch.
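
As I understand it, the helper boils down to a virtual-address-range
walk plus caller-supplied copy callbacks, roughly like the sketch below
(the names and signatures here are illustrative, not the exact ones
from the patches):

	/*
	 * Hypothetical sketch of a range-based migration helper.  The
	 * whole batch of pages backing [start, end) is handed to one
	 * alloc_and_copy() call, so the callback is free to program a
	 * DMA engine instead of doing per-page CPU copies.
	 */
	struct range_migrate_ops {
		void (*alloc_and_copy)(struct vm_area_struct *vma,
				       const unsigned long *src_pfns,
				       unsigned long *dst_pfns,
				       unsigned long start,
				       unsigned long end,
				       void *private);
		/* Called once CPU page tables point at the new pages. */
		void (*finalize_and_map)(struct vm_area_struct *vma,
					 const unsigned long *src_pfns,
					 const unsigned long *dst_pfns,
					 unsigned long start,
					 unsigned long end,
					 void *private);
	};

	int range_migrate(const struct range_migrate_ops *ops,
			  struct vm_area_struct *vma,
			  unsigned long start, unsigned long end,
			  void *private);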
Why should the normal page migration path (where neither src nor dest
is device private) use the hmm_migrate functionality? Patches 11-14 of
the series replicate a lot of the normal migration functionality, but
with special casing for HMM requirements. When migrating THPs or a
list of pages (your use case above), normal NUMA migration is going to
want to do this as fast as possible too (see Zi Yan's patches for
multi-threading normal migrations and a prototype of using Intel IOAT
for transfers; he sees a 3-5x speedup).

If the intention is to provide a common interface hook for migration
to use DMA acceleration (which is a good idea), it probably shouldn't
be special cased inside the HMM functionality. For example, using the
Intel IOAT for migration DMA has nothing to do with HMM whatsoever.
We need a normal migration path interface that allows DMA and isn't
tied to HMM.
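
Something along these lines, purely as a sketch (all names here are
invented for illustration), would let the generic migrate_pages() path
hand batches of pages to whatever copy engine is available, IOAT or
otherwise, with no HMM involvement:

	/*
	 * Hypothetical copy-offload hook for the generic migration
	 * path.  migrate_pages() would batch up src/dst pairs and call
	 * the registered engine instead of looping over
	 * copy_highpage(), falling back to the CPU copy if the engine
	 * returns an error or none is registered.
	 */
	struct migrate_dma_engine {
		/* Copy nr page pairs; return 0, or -errno to fall back. */
		int (*copy_pages)(struct page **dst, struct page **src,
				  unsigned int nr);
	};

	int migrate_register_dma_engine(const struct migrate_dma_engine *engine);

The CPU-copy fallback keeps the generic path unchanged when no engine
is present, so drivers could opt in without HMM being involved at all.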