Message-ID: <20190419233536.8080-1-rcampbell@nvidia.com>
Date: Fri, 19 Apr 2019 16:35:36 -0700
From: <rcampbell@...dia.com>
To: <linux-mm@...ck.org>
CC: <linux-kernel@...r.kernel.org>,
Ralph Campbell <rcampbell@...dia.com>,
Jérôme Glisse <jglisse@...hat.com>,
Ira Weiny <ira.weiny@...el.com>,
John Hubbard <jhubbard@...dia.com>,
Dan Williams <dan.j.williams@...el.com>,
Arnd Bergmann <arnd@...db.de>,
Balbir Singh <bsingharora@...il.com>,
Dan Carpenter <dan.carpenter@...cle.com>,
Matthew Wilcox <willy@...radead.org>,
Souptick Joarder <jrdr.linux@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: [RESEND PATCH] mm/hmm: Fix initial PFN for hugetlbfs pages
From: Ralph Campbell <rcampbell@...dia.com>
The mmotm patch [1] adds hugetlbfs support for HMM but the initial
PFN used to fill the HMM range->pfns[] array doesn't properly
compute the starting PFN offset.
This can be tested by running test-hugetlbfs-read from [2].
Fix this by shifting the byte offset within the huge page right by the
device's page shift, so that a page count rather than a byte count is
added to the PFN.
Andrew, this should probably be squashed into Jerome's patch.
[1] https://marc.info/?l=linux-mm&m=155432003506068&w=2
("mm/hmm: mirror hugetlbfs (snapshoting, faulting and DMA mapping)")
[2] https://gitlab.freedesktop.org/glisse/svm-cl-tests
Signed-off-by: Ralph Campbell <rcampbell@...dia.com>
Cc: Jérôme Glisse <jglisse@...hat.com>
Cc: Ira Weiny <ira.weiny@...el.com>
Cc: John Hubbard <jhubbard@...dia.com>
Cc: Dan Williams <dan.j.williams@...el.com>
Cc: Arnd Bergmann <arnd@...db.de>
Cc: Balbir Singh <bsingharora@...il.com>
Cc: Dan Carpenter <dan.carpenter@...cle.com>
Cc: Matthew Wilcox <willy@...radead.org>
Cc: Souptick Joarder <jrdr.linux@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
---
mm/hmm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index def451a56c3e..fcf8e4fb5770 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -868,7 +868,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
goto unlock;
}
- pfn = pte_pfn(entry) + (start & mask);
+ pfn = pte_pfn(entry) + ((start & mask) >> range->page_shift);
for (; addr < end; addr += size, i++, pfn += pfn_inc)
range->pfns[i] = hmm_device_entry_from_pfn(range, pfn) |
cpu_flags;
--
2.20.1