Date:	Wed, 9 Sep 2015 17:27:16 +0200
From:	Vlastimil Babka <vbabka@...e.cz>
To:	"Kirill A. Shutemov" <kirill@...temov.name>,
	Sasha Levin <sasha.levin@...cle.com>,
	Rik van Riel <riel@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Andrey Konovalov <andreyknvl@...gle.com>,
	Dmitry Vyukov <dvyukov@...gle.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: Multiple potential races on vma->vm_flags

On 09/07/2015 01:40 PM, Kirill A. Shutemov wrote:
> On Sun, Sep 06, 2015 at 03:21:05PM -0400, Sasha Levin wrote:
>> ==================================================================
>> ThreadSanitizer: data-race in munlock_vma_pages_range
>>
>> Write of size 8 by thread T378 (K2633, CPU3):
>>   [<ffffffff81212579>] munlock_vma_pages_range+0x59/0x3e0 mm/mlock.c:425
>>   [<ffffffff81212ac9>] mlock_fixup+0x1c9/0x280 mm/mlock.c:549
>>   [<ffffffff81212ccc>] do_mlock+0x14c/0x180 mm/mlock.c:589
>>   [<     inlined    >] SyS_munlock+0x74/0xb0 SYSC_munlock mm/mlock.c:651
>>   [<ffffffff812130b4>] SyS_munlock+0x74/0xb0 mm/mlock.c:643
>>   [<ffffffff81eb352e>] entry_SYSCALL_64_fastpath+0x12/0x71
>> arch/x86/entry/entry_64.S:186
>
> ...
>
>> Previous read of size 8 by thread T398 (K2623, CPU2):
>>   [<ffffffff8121d198>] try_to_unmap_one+0x78/0x4f0 mm/rmap.c:1208
>>   [<     inlined    >] rmap_walk+0x147/0x450 rmap_walk_file mm/rmap.c:1540
>>   [<ffffffff8121e7b7>] rmap_walk+0x147/0x450 mm/rmap.c:1559
>>   [<ffffffff8121ef72>] try_to_munlock+0xa2/0xc0 mm/rmap.c:1423
>>   [<ffffffff81211bb0>] __munlock_isolated_page+0x30/0x60 mm/mlock.c:129
>>   [<ffffffff81212066>] __munlock_pagevec+0x236/0x3f0 mm/mlock.c:331
>>   [<ffffffff812128a0>] munlock_vma_pages_range+0x380/0x3e0 mm/mlock.c:476
>>   [<ffffffff81212ac9>] mlock_fixup+0x1c9/0x280 mm/mlock.c:549
>>   [<ffffffff81212ccc>] do_mlock+0x14c/0x180 mm/mlock.c:589
>>   [<     inlined    >] SyS_munlock+0x74/0xb0 SYSC_munlock mm/mlock.c:651
>>   [<ffffffff812130b4>] SyS_munlock+0x74/0xb0 mm/mlock.c:643
>>   [<ffffffff81eb352e>] entry_SYSCALL_64_fastpath+0x12/0x71
>> arch/x86/entry/entry_64.S:186
>
> Okay, the detected race is mlock/munlock vs. rmap.
>
> On the rmap side we check vma->vm_flags in a few places without taking
> vma->vm_mm->mmap_sem. The vma cannot be freed since we hold i_mmap_rwsem
> or anon_vma_lock, but nothing prevents vma->vm_flags from changing under
> us.
>
> In this particular case, the speculative check at the beginning of
> try_to_unmap_one() is fine, since we re-check it under mmap_sem later in
> the function.
>
> A false negative is fine here too, since we will mlock the page in
> __mm_populate() on the mlock side after mlock_fixup().
>
> BUT.
>
> We *must* have all speculative vm_flags accesses wrapped in READ_ONCE() to
> avoid any compiler trickery, such as duplicating a vm_flags access with
> inconsistent results.
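
(For illustration, a minimal user-space rendition of that hazard; READ_ONCE()
is approximated with a volatile cast, and the names are made up for the
example:)

#define READ_ONCE(x) (*(const volatile typeof(x) *)&(x))
#define VM_LOCKED 0x00002000UL

unsigned long vm_flags;         /* stands in for vma->vm_flags */

int racy_check(void)
{
        /*
         * Plain load: for an ordinary (non-volatile) object the
         * compiler may rematerialize this as two separate loads of
         * vm_flags, one per use below, i.e. the duplicated access
         * with inconsistent results described above.
         */
        unsigned long flags = vm_flags;

        if (flags & VM_LOCKED)          /* load #1 may see VM_LOCKED set... */
                return (int)flags;      /* ...load #2 may see it cleared    */
        return -1;
}

int safe_check(void)
{
        /* Volatile access: exactly one load, one consistent snapshot. */
        unsigned long flags = READ_ONCE(vm_flags);

        if (flags & VM_LOCKED)
                return (int)flags;
        return -1;
}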

Doesn't taking a semaphore, as in try_to_unmap_one(), already imply a 
compiler barrier forcing vm_flags to be re-read?
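
(For reference, a user-space paraphrase of the pattern in question; this is
not the actual rmap code, and the names are simplified:)

#include <pthread.h>
#include <stdbool.h>

#define READ_ONCE(x) (*(const volatile typeof(x) *)&(x))
#define VM_LOCKED 0x00002000UL

struct vma {
        unsigned long vm_flags;
        pthread_rwlock_t sem;   /* stands in for vma->vm_mm->mmap_sem */
};

bool vma_still_locked(struct vma *vma)
{
        /* Speculative, unlocked check; may race with mlock/munlock. */
        if (!(READ_ONCE(vma->vm_flags) & VM_LOCKED))
                return false;

        /*
         * The lock acquisition is an opaque call with ACQUIRE
         * semantics, so the compiler cannot reuse the value loaded
         * above: the check below is a genuine re-read, and the
         * flags stay stable while the lock is held.
         */
        pthread_rwlock_rdlock(&vma->sem);
        bool locked = (vma->vm_flags & VM_LOCKED) != 0;
        pthread_rwlock_unlock(&vma->sem);

        return locked;
}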

> I looked only at the VM_LOCKED checks, but there are a few other flags
> checked in rmap. All of them must be handled carefully. At least
> READ_ONCE() is required.
>
> Another solution would be to introduce a per-vma spinlock to protect
> vma->vm_flags and probably other vma fields, and to offload this duty
> from mmap_sem.
> But that's a much bigger project.
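
(Purely for illustration, such a scheme might look roughly like the sketch
below; the lock field and helpers are invented and do not exist in the
kernel:)

/*
 * HYPOTHETICAL: a per-VMA spinlock serializing vm_flags updates,
 * so readers no longer depend on mmap_sem for a stable view.
 */
struct vm_area_struct {
        /* ... existing fields ... */
        unsigned long vm_flags;
        spinlock_t vm_flags_lock;       /* invented name */
};

static inline void vma_flags_set(struct vm_area_struct *vma,
                                 unsigned long flags)
{
        spin_lock(&vma->vm_flags_lock);
        vma->vm_flags |= flags;
        spin_unlock(&vma->vm_flags_lock);
}

static inline bool vma_flags_test(struct vm_area_struct *vma,
                                  unsigned long flags)
{
        bool set;

        spin_lock(&vma->vm_flags_lock);
        set = (vma->vm_flags & flags) != 0;
        spin_unlock(&vma->vm_flags_lock);

        return set;
}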

Sounds like overkill, unless we find something more serious than this.
