Message-Id: <20201117122149.302832162@linuxfoundation.org>
Date: Tue, 17 Nov 2020 14:05:53 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Ira Weiny <ira.weiny@...el.com>,
Jason Gunthorpe <jgg@...dia.com>,
Andrew Morton <akpm@...ux-foundation.org>,
John Hubbard <jhubbard@...dia.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Dan Williams <dan.j.williams@...el.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 5.9 213/255] mm/gup: use unpin_user_pages() in __gup_longterm_locked()

From: Jason Gunthorpe <jgg@...dia.com>
commit 96e1fac162cc0086c50b2b14062112adb2ba640e upstream.

When FOLL_PIN is passed to __get_user_pages(), the page list must be put
back using unpin_user_pages(); otherwise the page pin reference persists
in a corrupted state.
There are two places in the unwind of __gup_longterm_locked() that put
the pages back without checking whether FOLL_PIN was used. Normally on
error this function would return the partial page list, making this the
caller's responsibility, but in these two cases the caller is not
allowed to see these pages at all.
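
For illustration only (not part of the patch): the release rule this fix
enforces can be sketched as a small helper. FOLL_PIN, unpin_user_pages()
and put_page() are the existing gup APIs; the helper name
release_gup_pages() is hypothetical.

	/* Hypothetical sketch of the FOLL_PIN-aware release rule. */
	static void release_gup_pages(struct page **pages, unsigned long nr_pages,
				      unsigned int gup_flags)
	{
		unsigned long i;

		if (gup_flags & FOLL_PIN)
			/* pinned pages: drop the pin reference and pin counts */
			unpin_user_pages(pages, nr_pages);
		else
			/* ordinary get_user_pages() references */
			for (i = 0; i < nr_pages; i++)
				put_page(pages[i]);
	}
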
Fixes: 3faa52c03f44 ("mm/gup: track FOLL_PIN pages")
Reported-by: Ira Weiny <ira.weiny@...el.com>
Signed-off-by: Jason Gunthorpe <jgg@...dia.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Reviewed-by: Ira Weiny <ira.weiny@...el.com>
Reviewed-by: John Hubbard <jhubbard@...dia.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@...ux.ibm.com>
Cc: Dan Williams <dan.j.williams@...el.com>
Cc: <stable@...r.kernel.org>
Link: https://lkml.kernel.org/r/0-v2-3ae7d9d162e2+2a7-gup_cma_fix_jgg@nvidia.com
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
mm/gup.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1637,8 +1637,11 @@ check_again:
 		/*
 		 * drop the above get_user_pages reference.
 		 */
-		for (i = 0; i < nr_pages; i++)
-			put_page(pages[i]);
+		if (gup_flags & FOLL_PIN)
+			unpin_user_pages(pages, nr_pages);
+		else
+			for (i = 0; i < nr_pages; i++)
+				put_page(pages[i]);
 
 		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
 			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
@@ -1718,8 +1721,11 @@ static long __gup_longterm_locked(struct
 			goto out;
 
 		if (check_dax_vmas(vmas_tmp, rc)) {
-			for (i = 0; i < rc; i++)
-				put_page(pages[i]);
+			if (gup_flags & FOLL_PIN)
+				unpin_user_pages(pages, rc);
+			else
+				for (i = 0; i < rc; i++)
+					put_page(pages[i]);
 			rc = -EOPNOTSUPP;
 			goto out;
 		}