Message-Id: <20190805222019.28592-2-jhubbard@nvidia.com>
Date: Mon, 5 Aug 2019 15:20:17 -0700
From: john.hubbard@...il.com
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Christoph Hellwig <hch@...radead.org>,
Ira Weiny <ira.weiny@...el.com>, Jan Kara <jack@...e.cz>,
Jason Gunthorpe <jgg@...pe.ca>,
Jerome Glisse <jglisse@...hat.com>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, John Hubbard <jhubbard@...dia.com>,
Dan Williams <dan.j.williams@...el.com>,
Daniel Black <daniel@...ux.ibm.com>,
Matthew Wilcox <willy@...radead.org>,
Mike Kravetz <mike.kravetz@...cle.com>
Subject: [PATCH 1/3] mm/mlock.c: convert put_page() to put_user_page*()
From: John Hubbard <jhubbard@...dia.com>
For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").
Cc: Dan Williams <dan.j.williams@...el.com>
Cc: Daniel Black <daniel@...ux.ibm.com>
Cc: Jan Kara <jack@...e.cz>
Cc: Jérôme Glisse <jglisse@...hat.com>
Cc: Matthew Wilcox <willy@...radead.org>
Cc: Mike Kravetz <mike.kravetz@...cle.com>
Signed-off-by: John Hubbard <jhubbard@...dia.com>
---
mm/mlock.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/mlock.c b/mm/mlock.c
index a90099da4fb4..b980e6270e8a 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -345,7 +345,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 				get_page(page); /* for putback_lru_page() */
 				__munlock_isolated_page(page);
 				unlock_page(page);
-				put_page(page); /* from follow_page_mask() */
+				put_user_page(page); /* from follow_page_mask() */
 			}
 		}
 	}
@@ -467,7 +467,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
 		if (page && !IS_ERR(page)) {
 			if (PageTransTail(page)) {
 				VM_BUG_ON_PAGE(PageMlocked(page), page);
-				put_page(page); /* follow_page_mask() */
+				put_user_page(page); /* follow_page_mask() */
 			} else if (PageTransHuge(page)) {
 				lock_page(page);
 				/*
@@ -478,7 +478,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
 				 */
 				page_mask = munlock_vma_page(page);
 				unlock_page(page);
-				put_page(page); /* follow_page_mask() */
+				put_user_page(page); /* follow_page_mask() */
 			} else {
 				/*
 				 * Non-huge pages are handled in batches via
--
2.22.0