Message-ID: <20190911222829.28874-3-rcampbell@nvidia.com>
Date: Wed, 11 Sep 2019 15:28:27 -0700
From: Ralph Campbell <rcampbell@...dia.com>
To: <linux-mm@...ck.org>
CC: <linux-kernel@...r.kernel.org>, <amd-gfx@...ts.freedesktop.org>,
<dri-devel@...ts.freedesktop.org>, <nouveau@...ts.freedesktop.org>,
Jérôme Glisse <jglisse@...hat.com>,
Jason Gunthorpe <jgg@...lanox.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Christoph Hellwig" <hch@....de>,
Ralph Campbell <rcampbell@...dia.com>
Subject: [PATCH 2/4] mm/hmm: allow snapshot of the special zero page
Allow hmm_range_fault() to return success (0) when the CPU page table
entry points to the special shared zero page.
The caller can then handle the zero page by possibly clearing device
private memory instead of DMAing a zero page.
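For illustration only, a caller might use this roughly as sketched below.
This is not part of the patch; the device-side helper my_dev_clear_page()
and the surrounding loop are hypothetical names, and the exact
hmm_range_fault() calling convention may differ by kernel version:

	ret = hmm_range_fault(&range, 0);
	if (ret < 0)
		return ret;

	for (i = 0; i < npages; i++) {
		uint64_t pfn = range.pfns[i];

		if (pfn == range.values[HMM_PFN_SPECIAL]) {
			/*
			 * The CPU PTE is the shared zero page: there is no
			 * source data to copy, so clear the device private
			 * page instead of DMAing a page of zeroes.
			 */
			my_dev_clear_page(dev, i);
			continue;
		}

		/* Otherwise set up a normal DMA from the CPU page. */
		...
	}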
Signed-off-by: Ralph Campbell <rcampbell@...dia.com>
Cc: "Jérôme Glisse" <jglisse@...hat.com>
Cc: Jason Gunthorpe <jgg@...lanox.com>
Cc: Christoph Hellwig <hch@....de>
---
mm/hmm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index 06041d4399ff..7217912bef13 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -532,7 +532,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
return -EBUSY;
} else if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && pte_special(pte)) {
*pfn = range->values[HMM_PFN_SPECIAL];
- return -EFAULT;
+ return is_zero_pfn(pte_pfn(pte)) ? 0 : -EFAULT;
}
*pfn = hmm_device_entry_from_pfn(range, pte_pfn(pte)) | cpu_flags;
--
2.20.1