Date:	Tue, 1 Jun 2010 15:04:02 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Rik van Riel <riel@...hat.com>
Cc:	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Mel Gorman <mel@....ul.ie>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Minchan Kim <minchan.kim@...il.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Lee Schermerhorn <Lee.Schermerhorn@...com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH 2/5] change direct call of spin_lock(anon_vma->lock) to
 inline function

On Wed, 26 May 2010 15:39:26 -0400
Rik van Riel <riel@...hat.com> wrote:

> @@ -303,10 +303,10 @@ again:
>  		goto out;
>  
>  	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
> -	spin_lock(&anon_vma->lock);
> +	anon_vma_lock(anon_vma);
>  
>  	if (page_rmapping(page) != anon_vma) {
> -		spin_unlock(&anon_vma->lock);
> +		anon_vma_unlock(anon_vma);
>  		goto again;
>  	}
>  

This bit is dependent upon Peter's
mm-revalidate-anon_vma-in-page_lock_anon_vma.patch (below).  I've been
twiddling thumbs for weeks awaiting the updated version of that patch
(hint).

Do we think that this patch series is needed in 2.6.35?  If so, why?
And if it is, I guess we'll need to route around
mm-revalidate-anon_vma-in-page_lock_anon_vma.patch, or just merge it
as-is.
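
The anon_vma_lock()/anon_vma_unlock() helpers that hunk uses are
presumably just trivial inline wrappers around the existing spinlock,
added earlier in the series, i.e. something along these lines (my
sketch, not quoted from Rik's patches):

	static inline void anon_vma_lock(struct anon_vma *anon_vma)
	{
		spin_lock(&anon_vma->lock);
	}

	static inline void anon_vma_unlock(struct anon_vma *anon_vma)
	{
		spin_unlock(&anon_vma->lock);
	}

So the hunk itself is a mechanical conversion; the interesting part is
only that it touches the retry path added by Peter's patch below.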


From: Peter Zijlstra <a.p.zijlstra@...llo.nl>

There is nothing preventing the anon_vma from being detached while we are
spinning to acquire the lock.  Most (all?) current users end up calling
something like vma_address(page, vma) on it, which has a fairly good
chance of weeding out wonky vmas.

However, suppose the anon_vma got freed and re-used while we were waiting
to acquire the lock, and the new anon_vma happens to fit page->index
(which is the only thing vma_address() uses to determine whether the page
fits in a particular vma); then we could end up traversing faulty
anon_vma chains.

Close this hole for good by re-validating that page->mapping still holds
the very same anon_vma pointer after we acquire the lock.  If it does not,
be utterly paranoid and retry the whole operation (which will very likely
bail out, because it is unlikely the page got attached to a different
anon_vma in the meantime).

Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Hugh Dickins <hugh.dickins@...cali.co.uk>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Reviewed-by: Rik van Riel <riel@...hat.com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
---

 mm/rmap.c |    7 +++++++
 1 file changed, 7 insertions(+)

diff -puN mm/rmap.c~mm-revalidate-anon_vma-in-page_lock_anon_vma mm/rmap.c
--- a/mm/rmap.c~mm-revalidate-anon_vma-in-page_lock_anon_vma
+++ a/mm/rmap.c
@@ -370,6 +370,7 @@ struct anon_vma *page_lock_anon_vma(stru
 	unsigned long anon_mapping;
 
 	rcu_read_lock();
+again:
 	anon_mapping = (unsigned long) ACCESS_ONCE(page->mapping);
 	if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
 		goto out;
@@ -378,6 +379,12 @@ struct anon_vma *page_lock_anon_vma(stru
 
 	anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
 	spin_lock(&anon_vma->lock);
+
+	if (page_rmapping(page) != anon_vma) {
+		spin_unlock(&anon_vma->lock);
+		goto again;
+	}
+
 	return anon_vma;
 out:
 	rcu_read_unlock();
_
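
As a usage note (mine, not part of the patch): callers pair
page_lock_anon_vma() with page_unlock_anon_vma(), roughly the way
page_referenced_anon() does, and the re-check above is what makes the
returned anon_vma trustworthy.  anon_vma_cachep is SLAB_DESTROY_BY_RCU
(IIRC), so rcu_read_lock() alone only keeps the memory from
disappearing; it does not stop the object from being recycled as a
different anon_vma while we spin on the lock.

	/* Hypothetical caller, following the page_referenced_anon() pattern. */
	static int walk_anon_vma_of_page(struct page *page)
	{
		struct anon_vma *anon_vma;
		struct anon_vma_chain *avc;
		int ret = 0;

		/* Returns with rcu_read_lock() and anon_vma->lock held, or NULL. */
		anon_vma = page_lock_anon_vma(page);
		if (!anon_vma)
			return ret;

		list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
			struct vm_area_struct *vma = avc->vma;
			unsigned long address = vma_address(page, vma);

			if (address == -EFAULT)
				continue;
			/* ... per-vma work (e.g. a pte walk) goes here ... */
		}

		/* Drops anon_vma->lock and rcu_read_lock(). */
		page_unlock_anon_vma(anon_vma);
		return ret;
	}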

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists