Message-ID: <alpine.LSU.2.11.1509111734480.7660@eggly.anvils>
Date: Fri, 11 Sep 2015 18:27:14 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
cc: Andrey Konovalov <andreyknvl@...gle.com>,
Oleg Nesterov <oleg@...hat.com>,
Sasha Levin <sasha.levin@...cle.com>,
Rik van Riel <riel@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Dmitry Vyukov <dvyukov@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Vlastimil Babka <vbabka@...e.cz>,
Hugh Dickins <hughd@...gle.com>
Subject: Re: Multiple potential races on vma->vm_flags
On Fri, 11 Sep 2015, Kirill A. Shutemov wrote:
> On Thu, Sep 10, 2015 at 03:27:59PM +0200, Andrey Konovalov wrote:
> > Can a vma be shared among a few mm's?
>
> Define "shared".
>
> A vma can belong to only one process (mm_struct), but it can be accessed
> from other processes, as in the rmap case below.
>
> rmap uses anon_vma_lock for an anon vma and i_mmap_rwsem for a file vma
> to make sure that the vma will not disappear under it.
>
> > If yes, then taking current->mm->mmap_sem to protect vma is not enough.
>
> Depends on what protection you are talking about.
>
> > In the first report below both T378 and T398 take
> > current->mm->mmap_sem at mm/mlock.c:650, but they turn out to be
> > different locks (the addresses are different).
>
> See i_mmap_lock_read() in T398. It guarantees that the vma is there.
>
> > In the second report T309 doesn't take any locks at all, since it
> > assumes that after checking atomic_dec_and_test(&mm->mm_users) the mm
> > has no other users, but then it does a write to vma.
>
> This one is tricky. I *assume* the mm is not generally accessible after
> mm_users drops to zero, but I'm not entirely sure about it.
> procfs? ptrace?
Most of the things (including procfs and ptrace) that need to work on
a foreign mm do take a hold on mm_users with get_task_mm(). swapoff
uses atomic_inc_not_zero(&mm->mm_users). In KSM I managed to get away
with just a hold on the structure itself, atomic_inc(&mm->mm_count),
and a check for mm_users 0 wherever it down_reads mmap_sem (but Andrey
might like to turn KSM on: it wouldn't be entirely shocking if he were
to discover an anomaly from that).
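To make those pinning patterns concrete, here is a minimal sketch,
using the 2015-era API (atomic mm_users/mm_count, mmap_sem); the
function names are invented for illustration:

#include <linux/mm_types.h>
#include <linux/rwsem.h>
#include <linux/sched.h>

/* swapoff-style: pin mm_users so the address space cannot be torn down */
static bool foreign_mm_work_pinned(struct mm_struct *mm)
{
	if (!atomic_inc_not_zero(&mm->mm_users))
		return false;		/* mm already exiting: leave it alone */
	down_read(&mm->mmap_sem);
	/* ... safe to walk mm->mmap here ... */
	up_read(&mm->mmap_sem);
	mmput(mm);
	return true;
}

/* KSM-style: pin only the structure, re-check mm_users under mmap_sem */
static void foreign_mm_work_ksm_style(struct mm_struct *mm)
{
	atomic_inc(&mm->mm_count);	/* keeps mm_struct allocated, not the mappings */
	down_read(&mm->mmap_sem);
	if (atomic_read(&mm->mm_users)) {
		/* ... address space still live: safe to work on it ... */
	}
	up_read(&mm->mmap_sem);
	mmdrop(mm);
}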
>
> The VMA is still accessible via rmap at this point. And I think it can be
> a problem:
>
> CPU0                                CPU1
> exit_mmap()
>   // mmap_sem is *not* taken
>   munlock_vma_pages_all()
>     munlock_vma_pages_range()
>                                     try_to_unmap_one()
>                                       down_read_trylock(&vma->vm_mm->mmap_sem)
>                                       !!(vma->vm_flags & VM_LOCKED) == true
>       vma->vm_flags &= ~VM_LOCKED;
>       <munlock the page>
>                                       mlock_vma_page(page);
>                                       // mlocked page is leaked.
>
> The obvious solution is to take mmap_sem in the exit path, but it would
> cause a performance regression.
>
> Any comments?
I'm inclined to echo Vlastimil's comment from earlier in the thread:
sounds like overkill, unless we find something more serious than this.
I'm not sure whether we'd actually see a regression from taking mmap_sem
in the exit path; but given that it's mmap_sem, yes, history tells us,
please, not to take it any more than we have to.
I do remember wishing, when working out KSM's mm handling, that exit took
mmap_sem: it would have made it simpler, but that wasn't a change I dared
to make.
Maybe an mm_users 0 check after down_read_trylock in try_to_unmap_one()
could fix it?
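Concretely, in the out_mlock path of 4.2-era mm/rmap.c, perhaps
something like this (untested speculation, not a reviewed patch):

	if (down_read_trylock(&vma->vm_mm->mmap_sem)) {
		/*
		 * If mm_users has already dropped to zero, exit_mmap()
		 * may be munlocking this vma without mmap_sem: don't
		 * mlock the page behind its back, or it is leaked just
		 * as in the race above.
		 */
		if (atomic_read(&vma->vm_mm->mm_users) &&
		    (vma->vm_flags & VM_LOCKED)) {
			mlock_vma_page(page);
			ret = SWAP_MLOCK;
		}
		up_read(&vma->vm_mm->mmap_sem);
	}

Though mm_users could still drop to zero between that check and
mlock_vma_page(), so perhaps it only narrows the window rather than
closing it.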
But if we were to make a bigger change for this VM_LOCKED issue, and
something more serious makes it worth all the effort, I'd say that
what needs to be done is to give mlock/munlock proper locking (haha).
I have not yet looked at your mlocked THP patch (sorry), but when I
was doing the same thing for huge tmpfs, what made it so surprisingly
difficult was all the spongy trylocking, which concealed the rules.
Maybe I'm completely wrong, but I thought a lot of awkwardness might
disappear if they relied on anon_vma->rwsem and i_mmap_rwsem
throughout, instead of mmap_sem.
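For illustration only, the sort of shape I mean, with an invented
helper (the rmap walk already holds one of these locks for read when
try_to_unmap_one() tests VM_LOCKED, so a writer taking it exclusively
would serialize against the walk):

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/rmap.h>

/* hypothetical: clear VM_LOCKED under the rmap lock, not mmap_sem */
static void vma_clear_vm_locked(struct vm_area_struct *vma)
{
	struct address_space *mapping = vma->vm_file ?
					vma->vm_file->f_mapping : NULL;

	if (mapping)
		i_mmap_lock_write(mapping);
	else if (vma->anon_vma)
		anon_vma_lock_write(vma->anon_vma);

	vma->vm_flags &= ~VM_LOCKED;

	if (mapping)
		i_mmap_unlock_write(mapping);
	else if (vma->anon_vma)
		anon_vma_unlock_write(vma->anon_vma);
}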
Hugh