Message-ID: <acd70cd1-e930-8cbe-b962-da0715c9d311@nvidia.com>
Date:   Wed, 7 Dec 2022 15:25:59 -0800
From:   John Hubbard <jhubbard@...dia.com>
To:     Peter Xu <peterx@...hat.com>, <linux-mm@...ck.org>,
        <linux-kernel@...r.kernel.org>
CC:     Muchun Song <songmuchun@...edance.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        James Houghton <jthoughton@...gle.com>,
        Jann Horn <jannh@...gle.com>, Rik van Riel <riel@...riel.com>,
        Miaohe Lin <linmiaohe@...wei.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "Mike Kravetz" <mike.kravetz@...cle.com>,
        David Hildenbrand <david@...hat.com>,
        Nadav Amit <nadav.amit@...il.com>
Subject: Re: [PATCH v2 07/10] mm/hugetlb: Make follow_hugetlb_page() safe to
 pmd unshare

On 12/7/22 12:30, Peter Xu wrote:
> Since follow_hugetlb_page() walks the pgtable, it needs the vma lock
> to make sure the pgtable page will not be freed concurrently.
> 
> Acked-by: David Hildenbrand <david@...hat.com>
> Reviewed-by: Mike Kravetz <mike.kravetz@...cle.com>
> Signed-off-by: Peter Xu <peterx@...hat.com>
> ---
>   mm/hugetlb.c | 7 +++++++
>   1 file changed, 7 insertions(+)
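
Just to spell out the pattern for anyone reading along: the vma lock is
taken in read mode at the top of each loop iteration and released on
every path that leaves that iteration (the early error breaks, the
unlock-before-fault path, and the normal end of the loop body). Below is
a rough userspace sketch of the same acquire/release-on-every-exit
discipline, with a pthread rwlock standing in for the hugetlb vma lock;
walk_table() and the table[] array are made up purely for illustration
and are not kernel code.

/* Illustrative only: mirrors the acquire/release-on-every-exit-path
 * pattern of the patch above, using a pthread rwlock in place of the
 * hugetlb vma lock.  Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t table_lock = PTHREAD_RWLOCK_INITIALIZER;
static int table[8] = { 1, 2, 0, 4, 5, 0, 7, 8 };

/* Walk the table, holding the lock in read mode only while an entry
 * is examined, and dropping it on every path that leaves the body. */
static int walk_table(int *out_sum)
{
	int sum = 0;

	for (int i = 0; i < 8; i++) {
		pthread_rwlock_rdlock(&table_lock);

		if (table[i] == 0) {
			/* "Retry/fault" path: unlock before skipping
			 * ahead, like the unlock before faulting. */
			pthread_rwlock_unlock(&table_lock);
			continue;
		}

		if (table[i] < 0) {
			/* Error path: unlock before bailing out. */
			pthread_rwlock_unlock(&table_lock);
			return -1;
		}

		sum += table[i];

		/* Normal end of the iteration. */
		pthread_rwlock_unlock(&table_lock);
	}

	*out_sum = sum;
	return 0;
}

int main(void)
{
	int sum;

	if (walk_table(&sum) == 0)
		printf("sum = %d\n", sum);
	return 0;
}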

Reviewed-by: John Hubbard <jhubbard@...dia.com>


thanks,
-- 
John Hubbard
NVIDIA

> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 3fbbd599d015..f42399522805 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6284,6 +6284,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>   			break;
>   		}
>   
> +		hugetlb_vma_lock_read(vma);
>   		/*
>   		 * Some archs (sparc64, sh*) have multiple pte_ts to
>   		 * each hugepage.  We have to make sure we get the
> @@ -6308,6 +6309,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>   		    !hugetlbfs_pagecache_present(h, vma, vaddr)) {
>   			if (pte)
>   				spin_unlock(ptl);
> +			hugetlb_vma_unlock_read(vma);
>   			remainder = 0;
>   			break;
>   		}
> @@ -6329,6 +6331,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>   
>   			if (pte)
>   				spin_unlock(ptl);
> +			hugetlb_vma_unlock_read(vma);
> +
>   			if (flags & FOLL_WRITE)
>   				fault_flags |= FAULT_FLAG_WRITE;
>   			else if (unshare)
> @@ -6388,6 +6392,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>   			remainder -= pages_per_huge_page(h);
>   			i += pages_per_huge_page(h);
>   			spin_unlock(ptl);
> +			hugetlb_vma_unlock_read(vma);
>   			continue;
>   		}
>   
> @@ -6415,6 +6420,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>   			if (WARN_ON_ONCE(!try_grab_folio(pages[i], refs,
>   							 flags))) {
>   				spin_unlock(ptl);
> +				hugetlb_vma_unlock_read(vma);
>   				remainder = 0;
>   				err = -ENOMEM;
>   				break;
> @@ -6426,6 +6432,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
>   		i += refs;
>   
>   		spin_unlock(ptl);
> +		hugetlb_vma_unlock_read(vma);
>   	}
>   	*nr_pages = remainder;
>   	/*
