Message-ID: <9642114e-3093-cff0-e177-1071b478f27f@nvidia.com>
Date: Tue, 10 Jan 2017 09:30:30 -0600
From: David Nellans <dnellans@...dia.com>
To: Jerome Glisse <jglisse@...hat.com>
CC: <akpm@...ux-foundation.org>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>, John Hubbard <jhubbard@...dia.com>,
Evgeny Baskakov <ebaskakov@...dia.com>,
Mark Hairgrove <mhairgrove@...dia.com>,
Sherry Cheung <SCheung@...dia.com>,
Subhash Gutti <sgutti@...dia.com>,
Cameron Buschardt <cabuschardt@...dia.com>,
Zi Yan <zi.yan@...rutgers.edu>,
Anshuman Khandual <khandual@...ux.vnet.ibm.com>
Subject: Re: [HMM v15 13/16] mm/hmm/migrate: new memory migration helper for
use with device memory v2
> You are mischaracterizing patches 11-14. Patches 11-12 add new flags and
> modify existing functions so that they can be shared. Patch 13 implements
> the new migration helper, while patch 14 optimizes it.
>
> hmm_migrate() is different from the existing migration code because it
> works on a virtual address range of a process, whereas the existing
> migration code works from pages. The only other difference from the
> existing code is that we collect pages from the virtual address range
> and we allow use of a DMA engine to perform the copy.
You're right, but why not first introduce a new general migration interface
that works on a vma range, move the normal migration paths over to it, and
then add HMM and DMA support on top? Being able to migrate based on a vma
range certainly makes user-level control of memory placement/migration less
complicated than page-based interfaces.
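
To make that concrete, here is a minimal sketch of what such a range-based
interface might look like. Everything below (vma_migrate(), the ops
structure, its fields) is a hypothetical illustration, not code from this
patch series:

	#include <linux/mm.h>

	/*
	 * Hypothetical sketch only: a range-based migration interface with
	 * a pluggable copy step, so the same entry point can drive either a
	 * CPU memcpy or a device DMA engine.
	 */
	struct vma_migrate_ops {
		/* Copy one page; may memcpy directly or queue a DMA transfer. */
		int (*copy_page)(struct vm_area_struct *vma, struct page *dst,
				 struct page *src, void *private);
		/* Wait for any queued DMA transfers to complete. */
		int (*copy_wait)(void *private);
	};

	/*
	 * Migrate all pages backing [start, end) in @vma, using @ops for
	 * the copy so that callers choose between CPU and DMA engines.
	 */
	int vma_migrate(struct vm_area_struct *vma, unsigned long start,
			unsigned long end, const struct vma_migrate_ops *ops,
			void *private);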
> There is nothing that ties hmm_migrate() to HMM. If that makes you feel
> better I can drop the hmm_ prefix, but I would need a name other than
> migrate() as that one is already taken. I can probably name it
> vma_range_dma_migrate() or something like that.
>
> The only thing that is HMM-specific in this code is understanding HMM
> special page table entries and handling those. Such entries can only be
> migrated by DMA and not by memcpy, hence I do not modify the existing
> code to support them.
I'd be happier if there were a vma_migrate proposed independently; I think
it would find users outside the HMM sandbox. In the IBM migration case,
they might want the vma interface but choose CPU-based migration rather
than this DMA interface. It certainly would make testing of the
vma_migrate interface easier.
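
For the CPU-based case, the copy callback in the sketch above could be as
simple as the following. Using is_zone_device_page() to stand in for "HMM
special entry" is my assumption here; the point, as noted in the thread,
is that such pages can only be moved by DMA and not by the CPU:

	#include <linux/highmem.h>
	#include <linux/mm.h>
	#include <linux/string.h>

	/*
	 * Illustrative CPU copy callback for the hypothetical vma_migrate()
	 * above: a plain memcpy through temporary kernel mappings. Device
	 * memory is rejected because it is only reachable via DMA.
	 */
	static int cpu_copy_page(struct vm_area_struct *vma, struct page *dst,
				 struct page *src, void *private)
	{
		void *s, *d;

		if (is_zone_device_page(src))
			return -EINVAL;	/* device memory: DMA only */

		s = kmap_atomic(src);
		d = kmap_atomic(dst);
		memcpy(d, s, PAGE_SIZE);
		kunmap_atomic(d);
		kunmap_atomic(s);
		return 0;
	}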