Message-Id: <b6d50644-7d0c-2c1e-2781-2c6cc81ddc80@linux.ibm.com>
Date: Mon, 17 Dec 2018 16:04:15 +0530
From: "Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>
To: Mike Kravetz <mike.kravetz@...cle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Cc: Michal Hocko <mhocko@...nel.org>, Hugh Dickins <hughd@...gle.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
Andrea Arcangeli <aarcange@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Davidlohr Bueso <dave@...olabs.net>,
Prakash Sangappa <prakash.sangappa@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
stable@...r.kernel.org
Subject: Re: [PATCH 3/3] hugetlbfs: remove unnecessary code after i_mmap_rwsem
 synchronization

On 12/4/18 1:38 AM, Mike Kravetz wrote:
> After expanding i_mmap_rwsem use for better shared pmd and page fault/
> truncation synchronization, remove code that is no longer necessary.
>
> Cc: <stable@...r.kernel.org>
> Fixes: ebed4bfc8da8 ("hugetlb: fix absurd HugePages_Rsvd")
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
> ---
> fs/hugetlbfs/inode.c | 46 +++++++++++++++-----------------------------
> mm/hugetlb.c | 21 ++++++++++----------
> 2 files changed, 25 insertions(+), 42 deletions(-)
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index 3244147fc42b..a9c00c6ef80d 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -383,17 +383,16 @@ hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end)
> * truncation is indicated by end of range being LLONG_MAX
> * In this case, we first scan the range and release found pages.
> * After releasing pages, hugetlb_unreserve_pages cleans up region/reserv
> - * maps and global counts. Page faults can not race with truncation
> - * in this routine. hugetlb_no_page() prevents page faults in the
> - * truncated range. It checks i_size before allocation, and again after
> - * with the page table lock for the page held. The same lock must be
> - * acquired to unmap a page.
> + * maps and global counts.
> * hole punch is indicated if end is not LLONG_MAX
> * In the hole punch case we scan the range and release found pages.
> * Only when releasing a page is the associated region/reserv map
> * deleted. The region/reserv map for ranges without associated
> - * pages are not modified. Page faults can race with hole punch.
> - * This is indicated if we find a mapped page.
> + * pages are not modified.
> + *
> + * Callers of this routine must hold the i_mmap_rwsem in write mode to prevent
> + * races with page faults.
Should this patch be merged into the previous one? The changes to the
callers are already done in the previous patch.
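
For reference, the caller pattern from the previous patch looks roughly
like this (a sketch to illustrate the new comment above, not the exact
hunk; variable names are illustrative):

	struct address_space *mapping = inode->i_mapping;

	/*
	 * i_mmap_rwsem is taken in write mode around the call, so
	 * hugetlb page faults cannot race with truncate/hole punch
	 * while remove_inode_hugepages() runs.
	 */
	i_mmap_lock_write(mapping);
	remove_inode_hugepages(inode, offset, LLONG_MAX);
	i_mmap_unlock_write(mapping);
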
> + *
> * Note: If the passed end of range value is beyond the end of file, but
> * not LLONG_MAX this routine still performs a hole punch operation.
> */
> @@ -423,32 +422,14 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
>
> for (i = 0; i < pagevec_count(&pvec); ++i) {
>

-aneesh