Message-ID: <55F2FC87.6060908@suse.cz>
Date: Fri, 11 Sep 2015 18:08:39 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: "Kirill A. Shutemov" <kirill@...temov.name>,
Andrey Konovalov <andreyknvl@...gle.com>,
Oleg Nesterov <oleg@...hat.com>
Cc: Sasha Levin <sasha.levin@...cle.com>,
Rik van Riel <riel@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Dmitry Vyukov <dvyukov@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Hugh Dickins <hughd@...gle.com>
Subject: Re: Multiple potential races on vma->vm_flags

On 09/11/2015 05:29 PM, Vlastimil Babka wrote:
> On 09/11/2015 12:39 PM, Kirill A. Shutemov wrote:
>> On Thu, Sep 10, 2015 at 03:27:59PM +0200, Andrey Konovalov wrote:
>>> Can a vma be shared among a few mm's?
>>
>> Define "shared".
>>
>> A vma can belong to only one process (mm_struct), but it can be accessed
>> from another process, as in the rmap case below.
>>
>> rmap uses the anon_vma lock for anon vmas and i_mmap_rwsem for file vmas
>> to make sure that the vma will not disappear under it.
>>
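
For reference, the file-backed rmap walk does roughly this (a simplified
sketch of rmap_walk_file() from memory, for a kernel around this version;
not a verbatim excerpt):

    i_mmap_lock_read(mapping);
    vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
            unsigned long address = vma_address(page, vma);

            /* the vma cannot be freed while i_mmap_rwsem is held */
            ret = rwc->rmap_one(page, vma, address, rwc->arg);
            if (ret != SWAP_AGAIN)
                    break;
    }
    i_mmap_unlock_read(mapping);

So the vma's lifetime is covered, but nothing here serializes writes to
vma->vm_flags.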
>>> If yes, then taking current->mm->mmap_sem to protect vma is not enough.
>>
>> Depends on what protection you are talking about.
>>
>>> In the first report below both T378 and T398 take
>>> current->mm->mmap_sem at mm/mlock.c:650, but they turn out to be
>>> different locks (the addresses are different).
>>
>> See i_mmap_lock_read() in T398. It guarantees that the vma is there.
>>
>>> In the second report T309 doesn't take any locks at all, since it
>>> assumes that after checking atomic_dec_and_test(&mm->mm_users) the mm
>>> has no other users, but then it does a write to vma.
>>
>> This one is tricky. I *assume* the mm is not generally accessible after
>> mm_users drops to zero, but I'm not entirely sure about it.
>> procfs? ptrace?
>>
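
For context, the path in question is mmput() -> exit_mmap(): once mm_users
drops to zero, exit_mmap() runs without mmap_sem held. A heavily simplified
sketch from memory, not a verbatim excerpt:

    void mmput(struct mm_struct *mm)
    {
            if (atomic_dec_and_test(&mm->mm_users)) {
                    exit_mmap(mm);          /* no mmap_sem taken here */
                    ...
                    mmdrop(mm);
            }
    }

rmap, on the other hand, does not go through mm_users at all, which is what
makes the scenario below possible.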
>> The VMA is still accessible via rmap at this point. And I think it can be
>> a problem:
>>
>>          CPU0                            CPU1
>> exit_mmap()
>>   // mmap_sem is *not* taken
>>   munlock_vma_pages_all()
>>     munlock_vma_pages_range()
>>                                          try_to_unmap_one()
>>                                            down_read_trylock(&vma->vm_mm->mmap_sem)
>>                                            !!(vma->vm_flags & VM_LOCKED) == true
>>       vma->vm_flags &= ~VM_LOCKED;
>>       <munlock the page>
>>                                            mlock_vma_page(page);
>>                                            // mlocked page is leaked.
>>
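
For reference, the CPU1 side corresponds roughly to the out_mlock path in
try_to_unmap_one() (a simplified sketch from memory, for a kernel around
this version; not a verbatim excerpt):

    out_mlock:
            pte_unmap_unlock(pte, ptl);
            /*
             * The trylock only pins the vma against teardown under mmap_sem;
             * nothing serializes against munlock_vma_pages_range() called
             * from exit_mmap(), which runs without mmap_sem.
             */
            if (down_read_trylock(&vma->vm_mm->mmap_sem)) {
                    if (vma->vm_flags & VM_LOCKED) {
                            mlock_vma_page(page);
                            ret = SWAP_MLOCK;
                    }
                    up_read(&vma->vm_mm->mmap_sem);
            }
            return ret;

So if the VM_LOCKED test on CPU1 happens before CPU0 clears the flag, but
mlock_vma_page() runs after CPU0 has already munlocked the page, the page
stays mlocked with nobody left to munlock it.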
>> The obvious solution is to take mmap_sem in the exit path, but it would
>> cause a performance regression.
>>
>> Any comments?
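
For concreteness, the exit path in question is the munlock loop at the top
of exit_mmap(), and taking mmap_sem there would look roughly like this
(a sketch; the down_write()/up_write() pair is the proposed change, the
rest is simplified from memory):

    void exit_mmap(struct mm_struct *mm)
    {
            struct vm_area_struct *vma;
            ...
            if (mm->locked_vm) {
                    down_write(&mm->mmap_sem);      /* proposed fix */
                    vma = mm->mmap;
                    while (vma) {
                            if (vma->vm_flags & VM_LOCKED)
                                    munlock_vma_pages_all(vma);
                            vma = vma->vm_next;
                    }
                    up_write(&mm->mmap_sem);        /* proposed fix */
            }
            ...
    }

That would serialize the VM_LOCKED check and mlock_vma_page() against the
munlock pass, at the cost of taking mmap_sem on the exit path, which is the
regression mentioned above.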
>
> Just so others don't repeat the paths that I already looked at:
>
> - First I thought that try_to_unmap_one() has the page locked and
> munlock_vma_pages_range() will also lock it... but it doesn't.
More precisely, it does (in __munlock_pagevec()), but
TestClearPageMlocked(page) doesn't happen under the page lock.
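
Roughly what happens there (a heavily simplified sketch of
__munlock_pagevec() from memory, not a verbatim excerpt):

    /* Phase 1: clear PG_mlocked and isolate, under zone->lru_lock only */
    spin_lock_irq(&zone->lru_lock);
    for (i = 0; i < nr; i++) {
            struct page *page = pvec->pages[i];

            if (TestClearPageMlocked(page)) {
                    ...
            }
    }
    spin_unlock_irq(&zone->lru_lock);

    /* Phase 2: only here is the page lock taken */
    for (i = 0; i < nr; i++) {
            struct page *page = pvec->pages[i];

            if (page) {
                    lock_page(page);
                    __munlock_isolated_page(page);
                    unlock_page(page);
            }
    }

So the PG_mlocked clearing and the page lock are in separate phases.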
> - Then I thought that exit_mmap() will revisit the page anyway when doing
> the actual unmap. It would, and if it's the last one to have the page
> mapped, it will clear the mlock (see page_remove_rmap()). If it's not the
> last one, the page will be left mlocked. So it won't be completely leaked,
> but still, it will be mlocked when it shouldn't be.
>
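
For reference, the page_remove_rmap() behaviour referred to above (a heavily
simplified sketch from memory, not a verbatim excerpt):

    /* page still mapped by someone else? */
    if (!atomic_add_negative(-1, &page->_mapcount))
            return;
    ...
    /* last mapping gone: a leftover PG_mlocked is cleaned up here */
    if (unlikely(PageMlocked(page)))
            clear_page_mlock(page);

i.e. the mlock is only cleared when the last mapping goes away, which is why
the page stays mlocked if some other process still has it mapped.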