Date:   Tue, 6 Dec 2022 11:58:09 -0800
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Peter Xu <peterx@...hat.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        James Houghton <jthoughton@...gle.com>,
        Jann Horn <jannh@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Rik van Riel <riel@...riel.com>,
        Nadav Amit <nadav.amit@...il.com>,
        Miaohe Lin <linmiaohe@...wei.com>,
        Muchun Song <songmuchun@...edance.com>,
        David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH 09/10] mm/hugetlb: Make page_vma_mapped_walk() safe to
 pmd unshare

On 12/06/22 12:43, Peter Xu wrote:
> On Tue, Dec 06, 2022 at 12:39:53PM -0500, Peter Xu wrote:
> > On Tue, Dec 06, 2022 at 09:10:00AM -0800, Mike Kravetz wrote:
> > > On 12/05/22 15:52, Mike Kravetz wrote:
> > > > On 11/29/22 14:35, Peter Xu wrote:
> > > > > Since page_vma_mapped_walk() walks the pgtable, it needs the vma lock
> > > > > to make sure the pgtable page will not be freed concurrently.
> > > > > 
> > > > > Signed-off-by: Peter Xu <peterx@...hat.com>
> > > > > ---
> > > > >  include/linux/rmap.h | 4 ++++
> > > > >  mm/page_vma_mapped.c | 5 ++++-
> > > > >  2 files changed, 8 insertions(+), 1 deletion(-)
> > > > > 
> > > > > diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> > > > > index bd3504d11b15..a50d18bb86aa 100644
> > > > > --- a/include/linux/rmap.h
> > > > > +++ b/include/linux/rmap.h
> > > > > @@ -13,6 +13,7 @@
> > > > >  #include <linux/highmem.h>
> > > > >  #include <linux/pagemap.h>
> > > > >  #include <linux/memremap.h>
> > > > > +#include <linux/hugetlb.h>
> > > > >  
> > > > >  /*
> > > > >   * The anon_vma heads a list of private "related" vmas, to scan if
> > > > > @@ -408,6 +409,9 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
> > > > >  		pte_unmap(pvmw->pte);
> > > > >  	if (pvmw->ptl)
> > > > >  		spin_unlock(pvmw->ptl);
> > > > > +	/* This needs to be after unlock of the spinlock */
> > > > > +	if (is_vm_hugetlb_page(pvmw->vma))
> > > > > +		hugetlb_vma_unlock_read(pvmw->vma);
> > > > >  }
> > > > >  
> > > > >  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
> > > > > diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> > > > > index 93e13fc17d3c..f94ec78b54ff 100644
> > > > > --- a/mm/page_vma_mapped.c
> > > > > +++ b/mm/page_vma_mapped.c
> > > > > @@ -169,10 +169,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> > > > >  		if (pvmw->pte)
> > > > >  			return not_found(pvmw);
> > > > >  
> > > > > +		hugetlb_vma_lock_read(vma);
> > > > >  		/* when pud is not present, pte will be NULL */
> > > > >  		pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
> > > > > -		if (!pvmw->pte)
> > > > > +		if (!pvmw->pte) {
> > > > > +			hugetlb_vma_unlock_read(vma);
> > > > >  			return false;
> > > > > +		}
> > > > >  
> > > > >  		pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
> > > > >  		if (!check_pte(pvmw))
> > > > 
> > > > I think this is going to cause try_to_unmap() to always fail for hugetlb
> > > > shared pages.  See try_to_unmap_one:
> > > > 
> > > > 	while (page_vma_mapped_walk(&pvmw)) {
> > > > 		...
> > > > 		if (folio_test_hugetlb(folio)) {
> > > > 			...
> > > > 			/*
> > > >                          * To call huge_pmd_unshare, i_mmap_rwsem must be
> > > >                          * held in write mode.  Caller needs to explicitly
> > > >                          * do this outside rmap routines.
> > > >                          *
> > > >                          * We also must hold hugetlb vma_lock in write mode.
> > > >                          * Lock order dictates acquiring vma_lock BEFORE
> > > >                          * i_mmap_rwsem.  We can only try lock here and fail
> > > >                          * if unsuccessful.
> > > >                          */
> > > >                         if (!anon) {
> > > >                                 VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
> > > >                                 if (!hugetlb_vma_trylock_write(vma)) {
> > > >                                         page_vma_mapped_walk_done(&pvmw);
> > > >                                         ret = false;
> > > > 				}
> > > > 
> > > > 
> > > > Cannot think of a great solution right now.
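
To make the failure mode concrete: the hugetlb vma_lock is backed by a
reader-writer semaphore, so once page_vma_mapped_walk() has taken it for
read (as this patch does), the hugetlb_vma_trylock_write() above can never
succeed in the same task.  A minimal userspace sketch of this, using a
pthread rwlock as a stand-in for the kernel rw_semaphore (build with
cc -pthread; illustrative only, not kernel code):

#include <pthread.h>
#include <stdio.h>

int main(void)
{
	/* Stand-in for the hugetlb vma_lock (an rw_semaphore in the kernel). */
	pthread_rwlock_t vma_lock = PTHREAD_RWLOCK_INITIALIZER;

	/* page_vma_mapped_walk() with this patch: hugetlb_vma_lock_read(). */
	pthread_rwlock_rdlock(&vma_lock);

	/* try_to_unmap_one(): hugetlb_vma_trylock_write() on the same lock. */
	if (pthread_rwlock_trywrlock(&vma_lock) != 0)
		printf("trylock fails: unmap of the shared page bails out\n");
	else
		pthread_rwlock_unlock(&vma_lock);	/* not reached here */

	pthread_rwlock_unlock(&vma_lock);		/* drop the read lock */
	return 0;
}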
> > > 
> > > Thought of this last night ...
> > > 
> > > Perhaps we do not need vma_lock in this code path (not sure about all
> > > page_vma_mapped_walk calls).  Why?  We already hold i_mmap_rwsem.
> > 
> > Exactly.  The only concern is when it's not in an rmap walk.
> > 
> > I'm actually preparing something that adds a new flag to PVMW, like:
> > 
> > #define PVMW_HUGETLB_NEEDS_LOCK	(1 << 2)
> > 
> > But maybe we don't need that at all.  Having had a closer look, the only
> > outliers not using rmap are:
> > 
> > __replace_page
> > write_protect_page
> > 
> > I'm pretty sure ksm doesn't have hugetlb involved, and the other one is
> > uprobe (uprobe_write_opcode), which I think is the same.  If that's true,
> > we can simply drop this patch.  Then we also have hugetlb_walk(), and the
> > lock checks there guarantee that we're safe anyway.
> > 
> > We can also document this fact; I attached a comment patch for exactly
> > that, to be appended to the end of the patchset.
> > 
> > Mike, let me know what you think.
> > 
> > Andrew, if this patch is dropped then the last patch may not apply
> > cleanly.  Let me know if you want a full repost of the series.
> 
> The documentation patch that can be appended to the end of this series is attached.
> I referenced hugetlb_walk() so it needs to be the last patch.
> 
> -- 
> Peter Xu
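
For concreteness, the PVMW flag Peter mentions and then drops would have
been consumed along these lines.  PVMW_SYNC and PVMW_MIGRATION are the
existing flags in include/linux/rmap.h; PVMW_HUGETLB_NEEDS_LOCK is
hypothetical and was never merged:

#include <stdio.h>

/* Existing pvmw flags, as in include/linux/rmap.h. */
#define PVMW_SYNC		(1 << 0)
#define PVMW_MIGRATION		(1 << 1)
/* Hypothetical: sketched in this thread, dropped before posting. */
#define PVMW_HUGETLB_NEEDS_LOCK	(1 << 2)

int main(void)
{
	unsigned int flags = PVMW_HUGETLB_NEEDS_LOCK;

	/*
	 * Only callers not already covered by i_mmap_rwsem would set the
	 * flag, and only then would the hugetlb branch of the walk take
	 * the vma lock for read.
	 */
	if (flags & PVMW_HUGETLB_NEEDS_LOCK)
		printf("would call hugetlb_vma_lock_read(vma) here\n");
	return 0;
}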

Agree with dropping this patch and adding the document patch below.

Reviewed-by: Mike Kravetz <mike.kravetz@...cle.com>

Also, I'm happy we have the warnings in place to catch incorrect locking.
-- 
Mike Kravetz

> From 754c2180804e9e86accf131573cbd956b8c62829 Mon Sep 17 00:00:00 2001
> From: Peter Xu <peterx@...hat.com>
> Date: Tue, 6 Dec 2022 12:36:04 -0500
> Subject: [PATCH] mm/hugetlb: Document why page_vma_mapped_walk() is safe to
>  walk
> Content-type: text/plain
> 
> Taking the vma lock here is not needed for now because all potential hugetlb
> walkers here should have i_mmap_rwsem held.  Document this fact.
> 
> Signed-off-by: Peter Xu <peterx@...hat.com>
> ---
>  mm/page_vma_mapped.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index e97b2e23bd28..2e59a0419d22 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -168,8 +168,14 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  		/* The only possible mapping was handled on last iteration */
>  		if (pvmw->pte)
>  			return not_found(pvmw);
> -
> -		/* when pud is not present, pte will be NULL */
> +		/*
> +		 * NOTE: we don't need explicit lock here to walk the
> +		 * hugetlb pgtable because either (1) potential callers of
> +		 * hugetlb pvmw currently hold i_mmap_rwsem, or (2) the
> +		 * caller will not walk a hugetlb vma (e.g. ksm or uprobe).
> +		 * When one day this rule breaks, one will get a warning
> +		 * in hugetlb_walk(), and then we'll figure out what to do.
> +		 */
>  		pvmw->pte = hugetlb_walk(vma, pvmw->address, size);
>  		if (!pvmw->pte)
>  			return false;
> -- 
> 2.37.3
> 
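
The warning Mike refers to lives in hugetlb_walk().  A userspace sketch of
that kind of guard, with plain booleans standing in for lockdep state
(assumptions: the real check only fires when pmd sharing is possible for
the vma, and accepts either the hugetlb vma lock or i_mmap_rwsem being
held; not the kernel implementation):

#include <stdbool.h>
#include <stdio.h>

struct vma_sketch {
	bool shareable;			/* pmd sharing possible for this vma */
	bool vma_lock_held;		/* hugetlb vma_lock (read or write) */
	bool i_mmap_rwsem_held;		/* mapping->i_mmap_rwsem */
};

static void hugetlb_walk_sketch(struct vma_sketch *vma)
{
	/* Warn on an unlocked walk; the real code would then go on to
	 * call huge_pte_offset() regardless. */
	if (vma->shareable && !vma->vma_lock_held && !vma->i_mmap_rwsem_held)
		fprintf(stderr, "WARN: unlocked hugetlb pgtable walk\n");
}

int main(void)
{
	struct vma_sketch rmap_caller = { true, false, true };	/* quiet */
	struct vma_sketch bad_caller  = { true, false, false };	/* warns */

	hugetlb_walk_sketch(&rmap_caller);
	hugetlb_walk_sketch(&bad_caller);
	return 0;
}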
