Message-ID: <CAJuCfpEkujbHNxNWcWr8bmrsMhXGcpDyraOfQaPAcOH=RQPv5A@mail.gmail.com>
Date: Thu, 16 Feb 2023 11:36:24 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: "Liam R. Howlett" <Liam.Howlett@...cle.com>,
Suren Baghdasaryan <surenb@...gle.com>,
akpm@...ux-foundation.org, michel@...pinasse.org,
jglisse@...gle.com, mhocko@...e.com, vbabka@...e.cz,
hannes@...xchg.org, mgorman@...hsingularity.net, dave@...olabs.net,
willy@...radead.org, peterz@...radead.org, ldufour@...ux.ibm.com,
paulmck@...nel.org, mingo@...hat.com, will@...nel.org,
luto@...nel.org, songliubraving@...com, peterx@...hat.com,
david@...hat.com, dhowells@...hat.com, hughd@...gle.com,
bigeasy@...utronix.de, kent.overstreet@...ux.dev,
punit.agrawal@...edance.com, lstoakes@...il.com,
peterjung1337@...il.com, rientjes@...gle.com, chriscli@...gle.com,
axelrasmussen@...gle.com, joelaf@...gle.com, minchan@...gle.com,
rppt@...nel.org, jannh@...gle.com, shakeelb@...gle.com,
tatashin@...gle.com, edumazet@...gle.com, gthelen@...gle.com,
gurua@...gle.com, arjunroy@...gle.com, soheil@...gle.com,
leewalsh@...gle.com, posk@...gle.com,
michalechner92@...glemail.com, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, x86@...nel.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v3 21/35] mm/mmap: write-lock adjacent VMAs if they can
grow into unmapped area
On Thu, Feb 16, 2023 at 7:34 AM Liam R. Howlett <Liam.Howlett@...cle.com> wrote:
>
>
> First, sorry I didn't see this before v3..
Feedback at any time is highly appreciated!
>
> * Suren Baghdasaryan <surenb@...gle.com> [230216 00:18]:
> > While unmapping VMAs, adjacent VMAs might be able to grow into the area
> > being unmapped. In such cases write-lock adjacent VMAs to prevent this
> > growth.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> > ---
> > mm/mmap.c | 8 +++++---
> > 1 file changed, 5 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 118b2246bba9..00f8c5798936 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -2399,11 +2399,13 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> > * down_read(mmap_lock) and collide with the VMA we are about to unmap.
> > */
> > if (downgrade) {
> > - if (next && (next->vm_flags & VM_GROWSDOWN))
> > + if (next && (next->vm_flags & VM_GROWSDOWN)) {
> > + vma_start_write(next);
> > downgrade = false;
>
> If the mmap write lock is insufficient to protect us from next/prev
> modifications then we need to move *most* of this block above the maple
> tree write operation, otherwise we have a race here. When I say most, I
> mean everything besides the call to mmap_write_downgrade() needs to be
> moved.
Which prior maple tree write operation are you referring to? I see
__split_vma() and munmap_sidetree(), both of which already write-lock
the VMAs they operate on, so page faults can't happen in those VMAs.
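What I mean is roughly this (paraphrasing the earlier patch in this
series that write-locks VMAs before removing them, not quoting it
exactly):

static int munmap_sidetree(struct vm_area_struct *vma,
			   struct ma_state *mas_detach)
{
	/* Block the per-VMA-lock fault path from using this VMA. */
	vma_start_write(vma);

	/* Stash the VMA in the side tree for later teardown. */
	mas_set_range(mas_detach, vma->vm_start, vma->vm_end - 1);
	if (mas_store_gfp(mas_detach, vma, GFP_KERNEL))
		return -ENOMEM;
	...
	return 0;
}

So the VMAs actually being removed are covered; the question is only
about prev/next, which stay in the tree.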
>
> If the mmap write lock is sufficient to protect us from next/prev
> modifications then we don't need to write lock the vmas themselves.
The mmap write lock is not sufficient because the per-VMA-lock fault
path does not take the mmap lock at all.
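To spell that out, the fault path in this series looks roughly like
the sketch below (simplified, not the exact code); the fast path never
touches mmap_lock unless the per-VMA lock attempt fails:

	vma = lock_vma_under_rcu(mm, address);	/* mas_walk() + vma_start_read() */
	if (vma) {
		/* Succeeds only while nobody holds the VMA write lock. */
		fault = handle_mm_fault(vma, address,
					flags | FAULT_FLAG_VMA_LOCK, regs);
		vma_end_read(vma);
	} else {
		/* Fall back to the classic mmap_lock-protected path. */
		mmap_read_lock(mm);
		...
	}

So holding the mmap write lock by itself does not exclude a fault that
succeeds on the per-VMA fast path.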
>
> I believe this is for expand_stack() protection, so I believe it's okay
> to not vma write lock these vmas.. I don't think there are other areas
> where we can modify the vmas without holding the mmap lock, but others
> on the CC list please chime in if I've forgotten something.
>
> So, if I am correct, then you shouldn't lock next/prev and allow the
> vma locking fault method on these vmas. This will work because
> lock_vma_under_rcu() uses mas_walk() on the faulting address. That is,
> your lock_vma_under_rcu() will fail to find anything that needs to be
> grown and go back to mmap lock protection. As it is written today, the
> vma locking fault handler will fail and we will wait for the mmap lock
> to be released even when the vma isn't going to expand.
So, let's consider the case where the next VMA is not being removed
(it was neither removed nor locked by munmap_sidetree()) and it is
found by lock_vma_under_rcu() in the page fault handling path. The
page fault handler can now expand it and push it into the area we are
unmapping in unmap_region(). That is the race I'm trying to prevent
here by write-locking the next/prev VMAs, which could otherwise be
expanded before unmap_region() tears the area down. Am I missing
something?
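In case it helps, here is the interleaving I'm picturing (which may be
exactly where I'm wrong):

/*
 *   do_vmi_align_munmap()                page fault on 'next'
 *   ---------------------                --------------------
 *   split/detach VMAs in [start, end)
 *   (each detached VMA is write-locked)
 *                                        lock_vma_under_rcu() finds 'next'
 *                                        (it was never locked or removed)
 *                                        fault handling expands 'next'
 *                                        down into [start, end)
 *   unmap_region() frees page tables
 *   covering the range 'next' just
 *   grew into
 */

With vma_start_write(next) added, vma_start_read() in
lock_vma_under_rcu() fails for 'next', the fault falls back to the
mmap_lock path, and since we also keep downgrade == false it waits
until the munmap is finished.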
>
>
> > - else if (prev && (prev->vm_flags & VM_GROWSUP))
> > + } else if (prev && (prev->vm_flags & VM_GROWSUP)) {
> > + vma_start_write(prev);
> > downgrade = false;
> > - else
> > + } else
> > mmap_write_downgrade(mm);
> > }
> >
> > --
> > 2.39.1
>