Message-ID: <20220131051752.447699-3-jhubbard@nvidia.com>
Date: Sun, 30 Jan 2022 21:17:50 -0800
From: John Hubbard <jhubbard@...dia.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Peter Xu <peterx@...hat.com>, Jason Gunthorpe <jgg@...pe.ca>
CC: Jan Kara <jack@...e.cz>, Claudio Imbrenda <imbrenda@...ux.ibm.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Alex Williamson <alex.williamson@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Jérôme Glisse <jglisse@...hat.com>,
LKML <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
John Hubbard <jhubbard@...dia.com>
Subject: [PATCH 2/4] mm/gup: clean up follow_pfn_pte() slightly

Regardless of any FOLL_* flags, get_user_pages() and its variants should
handle PFN-only entries by stopping early if the caller expects **pages
to be filled in.

This makes for a more reliable API, as compared to the previous approach
of skipping over such entries (and thus leaving the corresponding
**pages slots silently unwritten).

Cc: Peter Xu <peterx@...hat.com>
Suggested-by: Jason Gunthorpe <jgg@...pe.ca>
Signed-off-by: John Hubbard <jhubbard@...dia.com>
---
mm/gup.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
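
Note for reviewers (not intended for the commit log): below is a minimal,
hypothetical sketch of the caller-visible contract this change is meant to
guarantee. The function name example_pin_user_range(), the FOLL_WRITE flag,
and the locking shown are assumptions made purely for illustration; they are
not part of this patch.

#include <linux/mm.h>
#include <linux/sched.h>

/* Hypothetical caller: pin nr_pages pages and require all of them. */
static int example_pin_user_range(unsigned long start, unsigned long nr_pages,
				  struct page **pages)
{
	long pinned;

	mmap_read_lock(current->mm);
	pinned = pin_user_pages(start, nr_pages, FOLL_WRITE, pages, NULL);
	mmap_read_unlock(current->mm);

	if (pinned < 0)
		return pinned;

	/*
	 * pages[] is only valid up to the returned count. With this patch,
	 * a PFN-only entry (e.g. in a VM_PFNMAP mapping) stops the walk
	 * whenever a pages array was supplied, regardless of FOLL_* flags,
	 * so no slot beyond that count is silently left unwritten.
	 */
	if (pinned != nr_pages) {
		unpin_user_pages(pages, pinned);
		return -EFAULT;
	}

	return 0;
}
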
diff --git a/mm/gup.c b/mm/gup.c
index 65575ae3602f..8633bca12eab 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -439,10 +439,6 @@ static struct page *no_page_table(struct vm_area_struct *vma,
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
 		pte_t *pte, unsigned int flags)
 {
-	/* No page to get reference */
-	if (flags & (FOLL_GET | FOLL_PIN))
-		return -EFAULT;
-
 	if (flags & FOLL_TOUCH) {
 		pte_t entry = *pte;
 
@@ -1180,8 +1176,14 @@ static long __get_user_pages(struct mm_struct *mm,
 		} else if (PTR_ERR(page) == -EEXIST) {
 			/*
 			 * Proper page table entry exists, but no corresponding
-			 * struct page.
+			 * struct page. If the caller expects **pages to be
+			 * filled in, bail out now, because that can't be done
+			 * for this page.
 			 */
+			if (pages) {
+				ret = -EFAULT;
+				goto out;
+			}
 			goto next_page;
 		} else if (IS_ERR(page)) {
 			ret = PTR_ERR(page);
--
2.35.0