Message-ID: <Y/8FNM9czzPHb5eG@localhost>
Date: Wed, 1 Mar 2023 07:56:36 +0000
From: Hyeonggon Yoo <42.hyeyoo@...il.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: akpm@...ux-foundation.org, michel@...pinasse.org,
jglisse@...gle.com, mhocko@...e.com, vbabka@...e.cz,
hannes@...xchg.org, mgorman@...hsingularity.net, dave@...olabs.net,
willy@...radead.org, liam.howlett@...cle.com, peterz@...radead.org,
ldufour@...ux.ibm.com, paulmck@...nel.org, mingo@...hat.com,
will@...nel.org, luto@...nel.org, songliubraving@...com,
peterx@...hat.com, david@...hat.com, dhowells@...hat.com,
hughd@...gle.com, bigeasy@...utronix.de, kent.overstreet@...ux.dev,
punit.agrawal@...edance.com, lstoakes@...il.com,
peterjung1337@...il.com, rientjes@...gle.com, chriscli@...gle.com,
axelrasmussen@...gle.com, joelaf@...gle.com, minchan@...gle.com,
rppt@...nel.org, jannh@...gle.com, shakeelb@...gle.com,
tatashin@...gle.com, edumazet@...gle.com, gthelen@...gle.com,
gurua@...gle.com, arjunroy@...gle.com, soheil@...gle.com,
leewalsh@...gle.com, posk@...gle.com,
michalechner92@...glemail.com, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, x86@...nel.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v4 18/33] mm: write-lock VMAs before removing them from
VMA tree
On Wed, Mar 01, 2023 at 07:43:33AM +0000, Hyeonggon Yoo wrote:
> On Mon, Feb 27, 2023 at 09:36:17AM -0800, Suren Baghdasaryan wrote:
> > Write-locking VMAs before isolating them ensures that page fault
> > handlers don't operate on isolated VMAs.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> > ---
> > mm/mmap.c | 1 +
> > mm/nommu.c | 5 +++++
> > 2 files changed, 6 insertions(+)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 1f42b9a52b9b..f7ed357056c4 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -2255,6 +2255,7 @@ int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
> > static inline int munmap_sidetree(struct vm_area_struct *vma,
> > struct ma_state *mas_detach)
> > {
> > + vma_start_write(vma);
> > mas_set_range(mas_detach, vma->vm_start, vma->vm_end - 1);
>
> I may be missing something, but I have a few questions:
>
> 1) Why does a writer need to both write-lock a VMA and mark it detached
> when unmapping it? Isn't it enough to just write-lock the VMA?
>
> 2) VMAs that are going to be removed are already locked in vma_prepare(),
> so I think this hunk could be dropped?
After sending this I realized that I did not consider the simple munmap case :)
But I still think 1) and 3) are valid questions.
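
To make question 1) more concrete, this is roughly how I read the
fault-side check from the earlier patches in this series. This is a
heavily simplified sketch of my understanding, not the actual code:
the helper name below is made up (I believe the real one is
lock_vma_under_rcu()), and the exact handling of vma->detached may
differ.

	/* sketch only: look up a VMA and take its read lock under RCU */
	static struct vm_area_struct *sketch_lock_vma(struct mm_struct *mm,
						      unsigned long address)
	{
		MA_STATE(mas, &mm->mm_mt, address, address);
		struct vm_area_struct *vma;

		rcu_read_lock();
		vma = mas_walk(&mas);			/* find a candidate VMA */
		if (vma && vma_start_read(vma)) {	/* fails while write-locked */
			if (vma->detached) {		/* already isolated from the tree */
				vma_end_read(vma);
				vma = NULL;		/* give up, fall back to mmap_lock */
			}
		} else {
			vma = NULL;			/* not found or write-locked */
		}
		rcu_read_unlock();
		return vma;				/* read-locked VMA, or NULL */
	}

So the write lock makes readers back off while the VMA is being
isolated, and the detached flag presumably catches readers that only
take the read lock after the write side has already been dropped,
which is why I'm wondering whether both steps are really needed on
the unmap path itself.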
>
> > if (mas_store_gfp(mas_detach, vma, GFP_KERNEL))
> > return -ENOMEM;
> > diff --git a/mm/nommu.c b/mm/nommu.c
> > index 57ba243c6a37..2ab162d773e2 100644
> > --- a/mm/nommu.c
> > +++ b/mm/nommu.c
> > @@ -588,6 +588,7 @@ static int delete_vma_from_mm(struct vm_area_struct *vma)
> > current->pid);
> > return -ENOMEM;
> > }
> > + vma_start_write(vma);
> > cleanup_vma_from_mm(vma);
>
> 3) I think this hunk could be dropped, as the per-VMA lock depends on MMU anyway.
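
(To spell out why I think the nommu hunk has no effect either way: if I
read the earlier patches correctly, CONFIG_PER_VMA_LOCK depends on MMU,
so on nommu builds the locking helpers should compile down to empty
stubs along these lines - the exact set of stubs is my assumption from
the rest of the series:)

	/* my understanding of the !CONFIG_PER_VMA_LOCK stubs */
	static inline void vma_start_write(struct vm_area_struct *vma) {}
	static inline void vma_mark_detached(struct vm_area_struct *vma,
					     bool detached) {}

So the vma_start_write() added to delete_vma_from_mm() would be a no-op
there, unless I am misreading the Kconfig dependency.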
>
> Thanks,
> Hyeonggon
>
> >
> > /* remove from the MM's tree and list */
> > @@ -1519,6 +1520,10 @@ void exit_mmap(struct mm_struct *mm)
> > */
> > mmap_write_lock(mm);
> > for_each_vma(vmi, vma) {
> > + /*
> > + * No need to lock VMA because this is the only mm user and no
> > + * page fault handlers can race with it.
> > + */
> > cleanup_vma_from_mm(vma);
> > delete_vma(mm, vma);
> > cond_resched();
> > --
> > 2.39.2.722.g9855ee24e9-goog
> >
> >
>