Message-ID: <20190807110147.GT11812@dhcp22.suse.cz>
Date: Wed, 7 Aug 2019 13:01:47 +0200
From: Michal Hocko <mhocko@...nel.org>
To: john.hubbard@...il.com
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Christoph Hellwig <hch@...radead.org>,
Ira Weiny <ira.weiny@...el.com>, Jan Kara <jack@...e.cz>,
Jason Gunthorpe <jgg@...pe.ca>,
Jerome Glisse <jglisse@...hat.com>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, John Hubbard <jhubbard@...dia.com>,
Dan Williams <dan.j.williams@...el.com>,
Daniel Black <daniel@...ux.ibm.com>,
Matthew Wilcox <willy@...radead.org>,
Mike Kravetz <mike.kravetz@...cle.com>
Subject: Re: [PATCH 1/3] mm/mlock.c: convert put_page() to put_user_page*()
On Mon 05-08-19 15:20:17, john.hubbard@...il.com wrote:
> From: John Hubbard <jhubbard@...dia.com>
>
> For pages that were retained via get_user_pages*(), release those pages
> via the new put_user_page*() routines, instead of via put_page() or
> release_pages().
Hmm, this is an interesting code path. There seems to be a mix of pages
in play here. We get one page via follow_page_mask(), but the other
pages in the range are filled in by __munlock_pagevec_fill(), which does
a direct pte walk. Is using put_user_page() correct for those pages?
Could you explain why in the changelog?
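
To make the mix explicit, here is roughly the shape of the loop in
question (my abridged paraphrase of munlock_vma_pages_range(), with the
error and THP handling cut out, so not the exact code):

	while (start < end) {
		/* Reference #1: a gup-style pin taken via FOLL_GET. */
		struct page *page = follow_page(vma, start,
						FOLL_GET | FOLL_DUMP);

		if (page && !IS_ERR(page) && !PageTransCompound(page)) {
			struct zone *zone = page_zone(page);

			pagevec_add(&pvec, page);
			/*
			 * References #2..#N: __munlock_pagevec_fill()
			 * walks the ptes directly and pins each page
			 * with a plain get_page(), no gup involved.
			 */
			start = __munlock_pagevec_fill(&pvec, vma, zone,
						       start, end);
			/*
			 * __munlock_pagevec() then drops every reference
			 * in the pagevec the same way, so converting its
			 * put_page() affects both kinds of pin.
			 */
			__munlock_pagevec(&pvec, zone);
		} else {
			start += PAGE_SIZE;
		}
		cond_resched();
	}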
> This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
> ("mm: introduce put_user_page*(), placeholder versions").
>
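
(For context, the pairing that commit introduces looks like the sketch
below. This is my illustration, not code from this patch; the function
name, the buffer and its size are all made up.)

#include <linux/mm.h>

/*
 * Illustration only: pages pinned by get_user_pages*() are meant to be
 * released via put_user_page*(), not via put_page()/release_pages().
 */
static int demo_pin_and_release(unsigned long user_addr, int nr)
{
	struct page *pages[16];		/* assume nr <= 16 for this sketch */
	int npages;

	npages = get_user_pages_fast(user_addr, nr, FOLL_WRITE, pages);
	if (npages <= 0)
		return npages ? npages : -EFAULT;

	/* ... read from or write to the pinned pages here ... */

	put_user_pages(pages, npages);	/* not a put_page() loop */
	return 0;
}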
> Cc: Dan Williams <dan.j.williams@...el.com>
> Cc: Daniel Black <daniel@...ux.ibm.com>
> Cc: Jan Kara <jack@...e.cz>
> Cc: Jérôme Glisse <jglisse@...hat.com>
> Cc: Matthew Wilcox <willy@...radead.org>
> Cc: Mike Kravetz <mike.kravetz@...cle.com>
> Signed-off-by: John Hubbard <jhubbard@...dia.com>
> ---
> mm/mlock.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/mlock.c b/mm/mlock.c
> index a90099da4fb4..b980e6270e8a 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -345,7 +345,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
> get_page(page); /* for putback_lru_page() */
> __munlock_isolated_page(page);
> unlock_page(page);
> - put_page(page); /* from follow_page_mask() */
> + put_user_page(page); /* from follow_page_mask() */
> }
> }
> }
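
Note that this put_user_page() runs for every page in the pagevec, but
only the first page of each batch carries a follow_page_mask()
reference. The remaining entries were pinned like this (my abridged
paraphrase of the core of __munlock_pagevec_fill()):

	pte = get_locked_pte(vma->vm_mm, start, &ptl);
	/* The page next to the pinned one is the first we try to get. */
	start += PAGE_SIZE;
	while (start < end) {
		struct page *page = NULL;

		pte++;
		if (pte_present(*pte))
			page = vm_normal_page(vma, start, *pte);
		if (!page || page_zone(page) != zone ||
		    PageTransCompound(page))
			break;

		get_page(page);	/* a plain reference, not a gup pin */
		start += PAGE_SIZE;
		if (pagevec_add(pvec, page) == 0)
			break;
	}
	pte_unmap_unlock(pte, ptl);

So the conversion also changes how those plain get_page() references
are dropped, which is what I would like to see justified.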
> @@ -467,7 +467,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
> if (page && !IS_ERR(page)) {
> if (PageTransTail(page)) {
> VM_BUG_ON_PAGE(PageMlocked(page), page);
> - put_page(page); /* follow_page_mask() */
> + put_user_page(page); /* follow_page_mask() */
> } else if (PageTransHuge(page)) {
> lock_page(page);
> /*
> @@ -478,7 +478,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
> */
> page_mask = munlock_vma_page(page);
> unlock_page(page);
> - put_page(page); /* follow_page_mask() */
> + put_user_page(page); /* follow_page_mask() */
> } else {
> /*
> * Non-huge pages are handled in batches via
> --
> 2.22.0
--
Michal Hocko
SUSE Labs