Message-ID: <Y46Af21YstNXfvW6@x1n>
Date:   Mon, 5 Dec 2022 18:36:31 -0500
From:   Peter Xu <peterx@...hat.com>
To:     Mike Kravetz <mike.kravetz@...cle.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        James Houghton <jthoughton@...gle.com>,
        Jann Horn <jannh@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Rik van Riel <riel@...riel.com>,
        Nadav Amit <nadav.amit@...il.com>,
        Miaohe Lin <linmiaohe@...wei.com>,
        Muchun Song <songmuchun@...edance.com>,
        David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH 04/10] mm/hugetlb: Move swap entry handling into vma lock
 when faulted

On Mon, Dec 05, 2022 at 02:14:38PM -0800, Mike Kravetz wrote:
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index dfe677fadaf8..776e34ccf029 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -5826,22 +5826,6 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
> >  	int need_wait_lock = 0;
> >  	unsigned long haddr = address & huge_page_mask(h);
> >  
> > -	ptep = huge_pte_offset(mm, haddr, huge_page_size(h));
> > -	if (ptep) {
> > -		/*
> > -		 * Since we hold no locks, ptep could be stale.  That is
> > -		 * OK as we are only making decisions based on content and
> > -		 * not actually modifying content here.
> > -		 */
> > -		entry = huge_ptep_get(ptep);
> > -		if (unlikely(is_hugetlb_entry_migration(entry))) {
> > -			migration_entry_wait_huge(vma, ptep);
> > -			return 0;
> > -		} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
> > -			return VM_FAULT_HWPOISON_LARGE |
> > -				VM_FAULT_SET_HINDEX(hstate_index(h));
> > -	}
> > -
> 
> Before acquiring the vma_lock, there is this comment:
> 
> 	/*
> 	 * Acquire vma lock before calling huge_pte_alloc and hold
> 	 * until finished with ptep.  This prevents huge_pmd_unshare from
> 	 * being called elsewhere and making the ptep no longer valid.
> 	 *
> 	 * ptep could have already been assigned via hugetlb_walk().  That
> 	 * is OK, as huge_pte_alloc will return the same value unless
> 	 * something has changed.
> 	 */
> 
> The second sentence in that comment about ptep being already assigned no
> longer applies and can be deleted.

Correct, this can be removed.
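
After dropping that sentence, the trimmed comment would read roughly like
this (a sketch only; the exact wording is whatever the fixup settles on):

	/*
	 * Acquire vma lock before calling huge_pte_alloc and hold
	 * until finished with ptep.  This prevents huge_pmd_unshare from
	 * being called elsewhere and making the ptep no longer valid.
	 */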

One thing to mention: there's an inline touch-up of that comment in the
last patch of the series (the one introducing hugetlb_walk()), doing
s/pte_offset/walk/ on it, but I saw that Andrew has already applied the
right fixup in his local tree, so I think we're all good.
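
For anyone following along without the whole series applied, here is a
rough, hand-written sketch (not the literal patched code) of the shape
the swap-entry handling takes after this patch: the same migration and
hwpoison checks from the removed hunk above, but run only once the vma
lock (and the hugetlb fault mutex) is held, so ptep can no longer go
stale underneath us:

	/* vma lock and hugetlb fault mutex held; ptep is stable */
	entry = huge_ptep_get(ptep);
	if (!pte_present(entry)) {
		if (unlikely(is_hugetlb_entry_migration(entry))) {
			/*
			 * Drop the fault mutex before sleeping; the vma
			 * lock is kept across the wait and released
			 * inside migration_entry_wait_huge().
			 */
			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
			migration_entry_wait_huge(vma, ptep);
			return 0;
		} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
			ret = VM_FAULT_HWPOISON_LARGE |
				VM_FAULT_SET_HINDEX(hstate_index(h));
		goto out_mutex;
	}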

Thanks!

-- 
Peter Xu
