Date:	Fri, 18 Nov 2011 09:42:05 +0800
From:	Nai Xia <nai.xia@...il.com>
To:	Andrea Arcangeli <aarcange@...hat.com>
Cc:	Hugh Dickins <hughd@...gle.com>, Mel Gorman <mgorman@...e.de>,
	Pawel Sikora <pluto@...k.net>,
	Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
	jpiszcz@...idpixels.com, arekm@...-linux.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mremap: enforce rmap src/dst vma ordering in case of
 vma_merge succeeding in copy_vma

On Fri, Nov 18, 2011 at 2:42 AM, Andrea Arcangeli <aarcange@...hat.com> wrote:
> Hi Hugh,
>
> On Wed, Nov 16, 2011 at 04:16:57PM -0800, Hugh Dickins wrote:
>> As you found, the mremap locking long predates truncation's double unmap.
>>
>> That's an interesting point, and you may be right - though, what about
>> the *very* unlikely case where unmap_mapping_range looks at new vma
>> when pte is in old, then at old vma when pte is in new, then
>> move_page_tables runs out of memory and cannot complete, then the
>> second unmap_mapping_range looks at old vma while pte is still in new
>> (I guess this needs some other activity to have jumbled the prio_tree,
>> and may just be impossible), then at new (to be abandoned) vma after
>> pte has moved back to old.
>
> I tend to think it should still work fine. The second loop is needed
> to take care of the "reverse" order. If the first move_page_tables is
> not in order the second move_page_tables will be in order. So it
> should catch it. If the first move_page_tables is in order, the double
> loop will catch any skip in the second move_page_tables.


First of all, I believe that at the POSIX level it's OK for
truncate_inode_page() not to scan COWed pages, since we basically make no
guarantee about this behavior for privately mapped file pages. But missing
a file-mapped pte after its cache page has already been removed from the
page cache is a fundamental malfunction for a shared mapping: some threads
see that the file cache page is gone while another thread is still reading
from or writing to it! No matter how short the gap between
truncate_inode_page() and the second loop, this is wrong.
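
To make the semantics I'm appealing to concrete, here's a minimal userspace
sketch (illustration only -- the file name, sizes and lack of error checking
are mine, not part of any patch): once ftruncate() drops a page from the
page cache, touching that offset through a MAP_SHARED mapping must observe
the truncation (SIGBUS on Linux), not stale data.

/*
 * Build: gcc -o trunc-demo trunc-demo.c
 * Illustration only; error checking omitted for brevity.
 */
#include <fcntl.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static sigjmp_buf env;

static void on_sigbus(int sig)
{
	(void)sig;
	siglongjmp(env, 1);
}

int main(void)
{
	size_t len = 2 * 4096;
	int fd = open("/tmp/trunc-demo", O_RDWR | O_CREAT | O_TRUNC, 0600);
	char *p;

	ftruncate(fd, len);
	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	memset(p, 'x', len);		/* both pages now in the page cache */

	signal(SIGBUS, on_sigbus);
	ftruncate(fd, 4096);		/* second page leaves the page cache */

	if (sigsetjmp(env, 1) == 0)
		printf("stale data still visible: %c\n", p[4096]);
	else
		printf("SIGBUS after truncation, as expected\n");
	return 0;
}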

Second, even if we don't care about the POSIX flaw this may introduce, a pte
can still be missed by the second loop. mremap can happen several times
during these non-atomic firstpass-truncate-secondpass operations, and the
right sequence of events can happily produce the wrong order for every scan
and miss them all -- that's exactly what Hugh had in mind in the post you
just replied to. Without the lock and proper ordering (which a partial
mremap cannot provide), this *will* happen.
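
Just to illustrate the window I mean (a sketch of the interleaving, not a
confirmed reproducer -- file name, sizes and the busy loops are arbitrary):
one thread bounces a MAP_SHARED mapping between two addresses with mremap()
while another keeps truncating and re-extending the file, so the rmap passes
of unmap_mapping_range() get plenty of chances to interleave with
move_page_tables():

/* Build: gcc -pthread -o mremap-vs-trunc mremap-vs-trunc.c */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEN	(16 * 4096)

static int fd;
static sigjmp_buf env;

static void on_sigbus(int sig)
{
	(void)sig;
	siglongjmp(env, 1);
}

/* keep the file bouncing so the truncate path runs all the time */
static void *truncate_loop(void *arg)
{
	(void)arg;
	for (;;) {
		ftruncate(fd, 4096);
		ftruncate(fd, LEN);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;
	void *a, *b, *cur;

	fd = open("/tmp/mremap-vs-trunc", O_RDWR | O_CREAT | O_TRUNC, 0600);
	ftruncate(fd, LEN);

	/* two slots to bounce the mapping between */
	a = mmap(NULL, LEN, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	b = mmap(NULL, LEN, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	cur = mmap(a, LEN, PROT_READ | PROT_WRITE,
		   MAP_SHARED | MAP_FIXED, fd, 0);

	signal(SIGBUS, on_sigbus);
	pthread_create(&t, NULL, truncate_loop, NULL);

	for (;;) {
		void *dst = (cur == a) ? b : a;

		/*
		 * Fault pages in so rmap has ptes to find; this may hit
		 * SIGBUS when racing with a truncation, which we ignore.
		 */
		if (sigsetjmp(env, 1) == 0)
			memset(cur, 'x', LEN);

		/* moving the vma is what makes old/new ordering matter */
		cur = mremap(cur, LEN, LEN,
			     MREMAP_MAYMOVE | MREMAP_FIXED, dst);
		if (cur == MAP_FAILED) {
			perror("mremap");
			exit(1);
		}
	}
}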

You may disagree with me and have that locking removed; I already have the
one-line patch prepared, waiting for the bug to pop up again -- what a
cheap patch submission!

:P


Thanks,

Nai
