Message-Id: <20190220220910.265bff9a7695540ee4121b80@linux-foundation.org>
Date: Wed, 20 Feb 2019 22:09:10 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Mike Kravetz <mike.kravetz@...cle.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Michal Hocko <mhocko@...nel.org>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Andrea Arcangeli <aarcange@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Davidlohr Bueso <dave@...olabs.net>, stable@...r.kernel.org
Subject: Re: [PATCH] hugetlbfs: fix races and page leaks during migration
On Tue, 12 Feb 2019 14:14:00 -0800 Mike Kravetz <mike.kravetz@...cle.com> wrote:
> hugetlb pages should only be migrated if they are 'active'. The routines
> set/clear_page_huge_active() modify the active state of hugetlb pages.
> When a new hugetlb page is allocated at fault time, set_page_huge_active
> is called before the page is locked. Therefore, another thread could
> race and migrate the page while it is being added to the page table by
> the fault code. This race is somewhat hard to trigger, but can be seen
> by strategically adding udelay calls to simulate worst-case scheduling
> behavior. Depending on 'how' the code races, various BUG()s could be
> triggered.
>
> To address this issue, simply delay the set_page_huge_active call until
> after the page is successfully added to the page table.
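>
> For reference, a lightly abbreviated sketch of the migration side
> (isolate_huge_page() in mm/hugetlb.c of this era; see the tree for the
> exact code). Isolation takes only hugetlb_lock and checks
> page_huge_active(), not the page lock, which is why marking a page
> active before it is locked and mapped opens the race window:
>
>	bool isolate_huge_page(struct page *page, struct list_head *list)
>	{
>		bool ret = true;
>
>		VM_BUG_ON_PAGE(!PageHead(page), page);
>		spin_lock(&hugetlb_lock);
>		/*
>		 * Only the active bit and a nonzero refcount gate
>		 * isolation; a page set active before the fault path
>		 * finishes can be pulled out from under it here.
>		 */
>		if (!page_huge_active(page) || !get_page_unless_zero(page)) {
>			ret = false;
>			goto unlock;
>		}
>		clear_page_huge_active(page);
>		list_move_tail(&page->lru, list);
>	unlock:
>		spin_unlock(&hugetlb_lock);
>		return ret;
>	}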
>
> Hugetlb pages can also be leaked at migration time if the pages are
> associated with a file in an explicitly mounted hugetlbfs filesystem.
> For example, a test program which hole punches, faults, and migrates
> pages in such a file (1G in size) will eventually fail because it
> cannot allocate a page. Reported counts and usage at the time of failure:
>
> node0
> 537 free_hugepages
> 1024 nr_hugepages
> 0 surplus_hugepages
> node1
> 1000 free_hugepages
> 1024 nr_hugepages
> 0 surplus_hugepages
>
> Filesystem Size Used Avail Use% Mounted on
> nodev 4.0G 4.0G 0 100% /var/opt/hugepool
>
> Note that the filesystem shows 4G of pages used, while actual usage is
> 511 pages (just under 1G). The test failed when trying to allocate page
> 512. A reproduction sketch of this cycle follows.
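>
> The test program itself is not included here; purely as illustration, a
> minimal userspace sketch of the hole-punch/fault/migrate cycle described
> above. The mount point and file name, the 2MB huge page size, and the
> use of move_pages(2) for migration are assumptions, not taken from the
> original test:
>
>	/* build: gcc -O2 -o hugepool-test hugepool-test.c -lnuma */
>	#define _GNU_SOURCE
>	#include <fcntl.h>
>	#include <numaif.h>		/* move_pages() */
>	#include <stdio.h>
>	#include <string.h>
>	#include <sys/mman.h>
>	#include <unistd.h>
>
>	#define HPAGE_SIZE	(2UL << 20)	/* assumes 2MB huge pages */
>	#define FILE_SIZE	(1UL << 30)	/* 1G file, as in the report */
>	#define NPAGES		(FILE_SIZE / HPAGE_SIZE)
>
>	int main(void)
>	{
>		/* assumed path on an explicitly mounted hugetlbfs */
>		int fd = open("/var/opt/hugepool/testfile",
>			      O_CREAT | O_RDWR, 0644);
>		char *addr;
>		unsigned long i;
>
>		if (fd < 0 || ftruncate(fd, FILE_SIZE))
>			return 1;
>		addr = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE,
>			    MAP_SHARED, fd, 0);
>		if (addr == MAP_FAILED)
>			return 1;
>
>		for (;;) {
>			/*
>			 * Punch out every page, then fault them back in.
>			 * Once enough pages have leaked, the refault gets
>			 * SIGBUS because no huge page can be allocated.
>			 */
>			if (fallocate(fd, FALLOC_FL_PUNCH_HOLE |
>				      FALLOC_FL_KEEP_SIZE, 0, FILE_SIZE))
>				break;
>			memset(addr, 0, FILE_SIZE);
>
>			/* migrate each page; a migration that drops
>			 * page_private leaks one page of accounting */
>			for (i = 0; i < NPAGES; i++) {
>				void *p = addr + i * HPAGE_SIZE;
>				int node = 1, status;
>
>				move_pages(0, 1, &p, &node, &status,
>					   MPOL_MF_MOVE);
>			}
>		}
>		perror("fallocate");
>		return 1;
>	}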
>
> If a hugetlb page is associated with an explicitly mounted filesystem,
> this information is contained in the page_private field. At migration
> time, this information is not preserved. To fix, simply transfer
> page_private from the old to the new page at migration time if necessary.
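>
> For context on why the untransferred page_private leaks pages: the free
> path recovers the subpool pointer from page_private and returns the page
> to the subpool's accounting. An abbreviated sketch of free_huge_page()
> from mm/hugetlb.c of this era (error handling and hstate accounting
> elided) shows this:
>
>	void free_huge_page(struct page *page)
>	{
>		struct hugepage_subpool *spool =
>			(struct hugepage_subpool *)page_private(page);
>		bool restore_reserve = PagePrivate(page);
>
>		set_page_private(page, 0);
>		ClearPagePrivate(page);
>		...
>		/*
>		 * With a NULL subpool pointer, this put never adjusts
>		 * the subpool's used count, so the filesystem's usage
>		 * can only grow until allocation fails.
>		 */
>		if (!restore_reserve)
>			if (hugepage_subpool_put_pages(spool, 1) == 0)
>				restore_reserve = true;
>		...
>	}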
>
> Cc: <stable@...r.kernel.org>
> Fixes: bcc54222309c ("mm: hugetlb: introduce page_huge_active")
> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
cc:stable. It would be nice to get some review of this one, please?
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -859,6 +859,18 @@ static int hugetlbfs_migrate_page(struct address_space *mapping,
> rc = migrate_huge_page_move_mapping(mapping, newpage, page);
> if (rc != MIGRATEPAGE_SUCCESS)
> return rc;
> +
> + /*
> + * page_private is subpool pointer in hugetlb pages. Transfer to
> + * new page. PagePrivate is not associated with page_private for
> + * hugetlb pages and can not be set here as only page_huge_active
> + * pages can be migrated.
> + */
> + if (page_private(page)) {
> + set_page_private(newpage, page_private(page));
> + set_page_private(page, 0);
> + }
> +
> if (mode != MIGRATE_SYNC_NO_COPY)
> migrate_page_copy(newpage, page);
> else
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index a80832487981..f859e319e3eb 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3625,7 +3625,6 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
> copy_user_huge_page(new_page, old_page, address, vma,
> pages_per_huge_page(h));
> __SetPageUptodate(new_page);
> - set_page_huge_active(new_page);
>
> mmun_start = haddr;
> mmun_end = mmun_start + huge_page_size(h);
> @@ -3647,6 +3646,7 @@ static vm_fault_t hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
> make_huge_pte(vma, new_page, 1));
> page_remove_rmap(old_page, true);
> hugepage_add_new_anon_rmap(new_page, vma, haddr);
> + set_page_huge_active(new_page);
> /* Make the old page be freed below */
> new_page = old_page;
> }
> @@ -3792,7 +3792,6 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
> }
> clear_huge_page(page, address, pages_per_huge_page(h));
> __SetPageUptodate(page);
> - set_page_huge_active(page);
>
> if (vma->vm_flags & VM_MAYSHARE) {
> int err = huge_add_to_page_cache(page, mapping, idx);
> @@ -3863,6 +3862,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
> }
>
> spin_unlock(ptl);
> +
> + /* May already be set if not newly allocated page */
> + set_page_huge_active(page);
> +
> unlock_page(page);
> out:
> return ret;
> @@ -4097,7 +4100,6 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
> * the set_pte_at() write.
> */
> __SetPageUptodate(page);
> - set_page_huge_active(page);
>
> mapping = dst_vma->vm_file->f_mapping;
> idx = vma_hugecache_offset(h, dst_vma, dst_addr);
> @@ -4165,6 +4167,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
> update_mmu_cache(dst_vma, dst_addr, dst_pte);
>
> spin_unlock(ptl);
> + set_page_huge_active(page);
> if (vm_shared)
> unlock_page(page);
> ret = 0;
> --
> 2.17.2