Date:	Wed, 2 Dec 2009 22:16:02 +0000
From:	Mel Gorman <mel@....ul.ie>
To:	Hugh Dickins <hugh.dickins@...cali.co.uk>
Cc:	Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] hugetlb: Acquire the i_mmap_lock before walking the
	prio_tree to unmap a page

On Wed, Dec 02, 2009 at 08:13:39PM +0000, Hugh Dickins wrote:
> On Wed, 2 Dec 2009, Mel Gorman wrote:
> 
> > When the owner of a mapping fails COW because a child process is holding a
> > reference and no pages are available, the VMAs of the children are walked and
> > the page is unmapped. The i_mmap_lock is taken for the unmapping of the page
> > but not for the walk of the prio_tree. In theory, that tree could be changing
> > while the lock is released, although in practice it is protected by the
> > hugetlb_instantiation_mutex. This patch takes the i_mmap_lock properly for
> > the duration of the prio_tree walk in case the hugetlb_instantiation_mutex
> > ever goes away.
> > 
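For anyone reading without the source to hand: the walk in question is in
unmap_ref_private(), which the hugetlb COW handler falls back to when it
cannot allocate a huge page. A condensed paraphrase of the 2.6.32-era
hugetlb_cow() (not verbatim, error handling trimmed):

	retry_avoidcopy:
		new_page = alloc_huge_page(vma, address, outside_reserve);
		if (IS_ERR(new_page)) {
			/*
			 * A child still holds a reference and the huge page
			 * pool is exhausted. Unmap the page from the child
			 * VMAs so the owner can retry the COW; a child that
			 * faults later may be SIGKILLed instead.
			 */
			if (outside_reserve &&
			    unmap_ref_private(mm, vma, old_page, address))
				goto retry_avoidcopy;
			return -PTR_ERR(new_page);
		}
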
> > [hugh.dickins@...cali.co.uk: Spotted the problem in the first place]
> > Signed-off-by: Mel Gorman <mel@....ul.ie>
> 
> The patch looks good - thanks for taking care of that, Mel.
> 
> But the comment seems wrong to me: hugetlb_instantiation_mutex
> guards against concurrent hugetlb_fault()s; but the structure of
> the prio_tree shifts as vmas based on that inode are inserted into
> (mmap'ed) and removed from (munmap'ed) that tree (always while
> holding i_mmap_lock).  I don't see hugetlb_instantiation_mutex
> giving us any protection against this at present.
> 

You're right, of course. I'll repost without that nonsense included.
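
For the record, the updates you describe are in the mmap()/munmap() paths
and always take the lock. Condensing vma_link()/__vma_link_file() and
__remove_shared_vm_struct() from 2.6.32-era mm/mmap.c into a sketch (names
and details simplified, not verbatim):

	/* How a file-backed VMA enters and leaves mapping->i_mmap */
	spin_lock(&mapping->i_mmap_lock);
	vma_prio_tree_insert(vma, &mapping->i_mmap);	/* mmap() side */
	spin_unlock(&mapping->i_mmap_lock);

	spin_lock(&mapping->i_mmap_lock);
	vma_prio_tree_remove(vma, &mapping->i_mmap);	/* munmap() side */
	spin_unlock(&mapping->i_mmap_lock);

So the tree can indeed shift under a walker that drops the lock, whether or
not hugetlb_instantiation_mutex is held.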

Thanks

> 
> > ---
> >  mm/hugetlb.c |    9 ++++++++-
> >  1 files changed, 8 insertions(+), 1 deletions(-)
> > 
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index a952cb8..5adc284 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1906,6 +1906,12 @@ static int unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
> >  		+ (vma->vm_pgoff >> PAGE_SHIFT);
> >  	mapping = (struct address_space *)page_private(page);
> >  
> > +	/*
> > +	 * Take the mapping lock for the duration of the prio_tree walk. As
> > +	 * this mapping is shared between all the VMAs, call
> > +	 * __unmap_hugepage_range() directly because the lock is already held.
> > +	 */
> > +	spin_lock(&mapping->i_mmap_lock);
> >  	vma_prio_tree_foreach(iter_vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
> >  		/* Do not unmap the current VMA */
> >  		if (iter_vma == vma)
> > @@ -1919,10 +1925,11 @@ static int unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
> >  		 * from the time of fork. This would look like data corruption
> >  		 */
> >  		if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
> > -			unmap_hugepage_range(iter_vma,
> > +			__unmap_hugepage_range(iter_vma,
> >  				address, address + huge_page_size(h),
> >  				page);
> >  	}
> > +	spin_unlock(&mapping->i_mmap_lock);
> >  
> >  	return 1;
> >  }
> 
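
One note for anyone applying this: the switch from unmap_hugepage_range() to
__unmap_hugepage_range() is what keeps the patch deadlock-free, as the plain
wrapper takes the same lock itself. In 2.6.32-era mm/hugetlb.c it is
essentially:

	/* Locked wrapper: taking i_mmap_lock again here would deadlock */
	void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
				  unsigned long end, struct page *ref_page)
	{
		spin_lock(&vma->vm_file->f_mapping->i_mmap_lock);
		__unmap_hugepage_range(vma, start, end, ref_page);
		spin_unlock(&vma->vm_file->f_mapping->i_mmap_lock);
	}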

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
