Message-Id: <20160111162931.0bea916e.akpm@linux-foundation.org>
Date: Mon, 11 Jan 2016 16:29:31 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Mike Kravetz <mike.kravetz@...cle.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Hugh Dickins <hughd@...gle.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Hillf Danton <hillf.zj@...baba-inc.com>,
Davidlohr Bueso <dave@...olabs.net>,
Dave Hansen <dave.hansen@...ux.intel.com>
Subject: Re: [PATCH] mm/hugetlbfs: Unmap pages if page fault raced with hole punch

On Mon, 11 Jan 2016 15:38:40 -0800 Mike Kravetz <mike.kravetz@...cle.com> wrote:
> On 01/11/2016 02:35 PM, Andrew Morton wrote:
> > On Wed, 6 Jan 2016 14:37:04 -0800 Mike Kravetz <mike.kravetz@...cle.com> wrote:
> >
> >> Page faults can race with fallocate hole punch. If a page fault happens
> >> between the unmap and remove operations, the page is not removed and
> >> remains within the hole. This is not the desired behavior. The race
> >> is difficult to detect in user level code as even in the non-race
> >> case, a page within the hole could be faulted back in before fallocate
> >> returns. If userfaultfd is expanded to support hugetlbfs in the future,
> >> this race will be easier to observe.
> >>
> >> If this race is detected and a page is mapped, the remove operation
> >> (remove_inode_hugepages) will unmap the page before removing. The unmap
> >> within remove_inode_hugepages occurs with the hugetlb_fault_mutex held
> >> so that no other faults will be processed until the page is removed.
> >>
> >> The (unmodified) routine hugetlb_vmdelete_list was moved ahead of
> >> remove_inode_hugepages to satisfy the new reference.
> >>
> >> ...
> >>
> >> --- a/fs/hugetlbfs/inode.c
> >> +++ b/fs/hugetlbfs/inode.c
> >>
> >> ...
> >>
> >> @@ -395,37 +431,43 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
> >> mapping, next, 0);
> >> mutex_lock(&hugetlb_fault_mutex_table[hash]);
> >>
> >> - lock_page(page);
> >> - if (likely(!page_mapped(page))) {
> >
> > hm, what are the locking requirements for page_mapped()?
>
> page_mapped is just reading an atomic count within the struct page,
> on which we hold a reference from the pagevec_lookup. But I think the
> real question is what prevents page_mapped from changing after we
> check it?
>
> The patch takes the appropriate mutex in hugetlb_fault_mutex_table
> before checking page_mapped. If the page is unmapped and that mutex
> is held, it cannot be faulted in and change from unmapped to mapped.
>
> The new comment in the patch about taking the hugetlb_fault_mutex is
> placed right before the page_mapped check.
OK, thanks.
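
For anyone following along, the ordering being described looks roughly
like this (a simplified sketch of the pattern, not the exact hunk;
helper signatures and variable names are as in fs/hugetlbfs/inode.c
around the time of this patch):

	hash = hugetlb_fault_mutex_hash(h, current->mm, &pseudo_vma,
					mapping, next, 0);
	mutex_lock(&hugetlb_fault_mutex_table[hash]);

	/*
	 * With the fault mutex held, a racing fault on this index
	 * cannot be processed, so page_mapped() cannot flip from
	 * false to true underneath us.
	 */
	if (unlikely(page_mapped(page))) {
		/* a fault raced with the earlier unmap; unmap again */
		i_mmap_lock_write(mapping);
		hugetlb_vmdelete_list(&mapping->i_mmap,
				next * pages_per_huge_page(h),
				(next + 1) * pages_per_huge_page(h));
		i_mmap_unlock_write(mapping);
	}

	lock_page(page);
	remove_huge_page(page);		/* needs the page lock, see below */
	unlock_page(page);

	mutex_unlock(&hugetlb_fault_mutex_table[hash]);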
> >
> >> - bool rsv_on_error = !PagePrivate(page);
> >> - /*
> >> - * We must free the huge page and remove
> >> - * from page cache (remove_huge_page) BEFORE
> >> - * removing the region/reserve map
> >> - * (hugetlb_unreserve_pages). In rare out
> >> - * of memory conditions, removal of the
> >> - * region/reserve map could fail. Before
> >> - * free'ing the page, note PagePrivate which
> >> - * is used in case of error.
> >> - */
> >> - remove_huge_page(page);
> >
> > And remove_huge_page().
>
> The page must be locked before calling remove_huge_page, since it will
> call delete_from_page_cache. It currently is locked. Would you prefer
> a comment stating this before the call?
No, that doesn't seem necessary.
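
For reference, the constraint is visible in the helper itself (as it
reads in fs/hugetlbfs/inode.c at this point; delete_from_page_cache()
asserts that the page is locked):

	static void remove_huge_page(struct page *page)
	{
		ClearPageDirty(page);
		ClearPageUptodate(page);
		delete_from_page_cache(page);	/* BUG_ON(!PageLocked(page)) */
	}
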
I'll mark this patch as "pending, awaiting Mike's go-ahead".