Message-ID: <20220203150123.GB8034@ziepe.ca>
Date: Thu, 3 Feb 2022 11:01:23 -0400
From: Jason Gunthorpe <jgg@...pe.ca>
To: Jan Kara <jack@...e.cz>
Cc: John Hubbard <jhubbard@...dia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Xu <peterx@...hat.com>,
David Hildenbrand <david@...hat.com>,
Lukas Bulwahn <lukas.bulwahn@...il.com>,
Claudio Imbrenda <imbrenda@...ux.ibm.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Alex Williamson <alex.williamson@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org
Subject: Re: [PATCH v3 2/4] mm/gup: clean up follow_pfn_pte() slightly

On Thu, Feb 03, 2022 at 02:53:52PM +0100, Jan Kara wrote:
> On Thu 03-02-22 01:32:30, John Hubbard wrote:
> > Regardless of any FOLL_* flags, get_user_pages() and its variants should
> > handle PFN-only entries by stopping early, if the caller expected
> > **pages to be filled in.
> >
> > This makes for a more reliable API, as compared to the previous approach
> > of skipping over such entries (and thus leaving them silently
> > unwritten).
> >
> > Cc: Peter Xu <peterx@...hat.com>
> > Cc: Lukas Bulwahn <lukas.bulwahn@...il.com>
> > Suggested-by: Jason Gunthorpe <jgg@...dia.com>
> > Reviewed-by: Jason Gunthorpe <jgg@...dia.com>
> > Signed-off-by: John Hubbard <jhubbard@...dia.com>
> > ---
> >  mm/gup.c | 11 ++++++-----
> >  1 file changed, 6 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 65575ae3602f..cad3f28492e3 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -439,10 +439,6 @@ static struct page *no_page_table(struct vm_area_struct *vma,
> >  static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
> >  		pte_t *pte, unsigned int flags)
> >  {
> > -	/* No page to get reference */
> > -	if (flags & (FOLL_GET | FOLL_PIN))
> > -		return -EFAULT;
> > -
> >  	if (flags & FOLL_TOUCH) {
> >  		pte_t entry = *pte;
> >
>
> This will also modify the error code returned from follow_page().

Er, but isn't that the whole point of this design? It's what the
commit that added it says:

commit 1027e4436b6a5c413c95d95e50d0f26348a602ac
Author: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Date:   Fri Sep 4 15:47:55 2015 -0700

    mm: make GUP handle pfn mapping unless FOLL_GET is requested

    With DAX, pfn mapping becoming more common. The patch adjusts GUP code to
    cover pfn mapping for cases when we don't need struct page to proceed.

    To make it possible, let's change follow_page() code to return -EEXIST
    error code if proper page table entry exists, but no corresponding struct
    page. __get_user_page() would ignore the error code and move to the next
    page frame.

    The immediate effect of the change is working MAP_POPULATE and mlock() on
    DAX mappings.
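
The caller side in __get_user_pages() then handles the -EEXIST return
roughly like this (a paraphrased sketch, not the actual hunk; names
like foll_flags/ctx/next_page are from memory and may be off):

page = follow_page_mask(vma, start, foll_flags, &ctx);
if (PTR_ERR(page) == -EEXIST) {
	/*
	 * Proper page table entry exists, but there is no
	 * corresponding struct page.  If the caller expects **pages
	 * to be filled in, stop early instead of silently skipping
	 * the entry.
	 */
	if (pages) {
		ret = PTR_ERR(page);
		goto out;
	}
	/* No struct page needed, just move on to the next frame. */
	goto next_page;
}
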
> A quick audit shows that at least the user in mm/migrate.c will
> propagate this error code to userspace and I'm not sure the change
> in error code won't break something... EEXIST is a bit of a strange
> error code to get from move_pages(2).

That makes sense, maybe move_pages() should squash the return codes to
EFAULT?
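
Something along these lines in mm/migrate.c's add_page_for_migration(),
assuming that's still the right spot (untested sketch, not a real
patch):

/* FOLL_DUMP to ignore special (like zero) pages */
page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);

err = PTR_ERR(page);
if (IS_ERR(page)) {
	/*
	 * A pfn-only mapping has no struct page to migrate; don't
	 * leak -EEXIST to userspace, keep returning the old -EFAULT.
	 */
	if (err == -EEXIST)
		err = -EFAULT;
	goto out;
}
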
Jason