Message-ID: <Y/74WA6JLPWAZ/G6@localhost>
Date: Wed, 1 Mar 2023 07:01:44 +0000
From: Hyeonggon Yoo <42.hyeyoo@...il.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: akpm@...ux-foundation.org, michel@...pinasse.org,
jglisse@...gle.com, mhocko@...e.com, vbabka@...e.cz,
hannes@...xchg.org, mgorman@...hsingularity.net, dave@...olabs.net,
willy@...radead.org, liam.howlett@...cle.com, peterz@...radead.org,
ldufour@...ux.ibm.com, paulmck@...nel.org, mingo@...hat.com,
will@...nel.org, luto@...nel.org, songliubraving@...com,
peterx@...hat.com, david@...hat.com, dhowells@...hat.com,
hughd@...gle.com, bigeasy@...utronix.de, kent.overstreet@...ux.dev,
punit.agrawal@...edance.com, lstoakes@...il.com,
peterjung1337@...il.com, rientjes@...gle.com, chriscli@...gle.com,
axelrasmussen@...gle.com, joelaf@...gle.com, minchan@...gle.com,
rppt@...nel.org, jannh@...gle.com, shakeelb@...gle.com,
tatashin@...gle.com, edumazet@...gle.com, gthelen@...gle.com,
gurua@...gle.com, arjunroy@...gle.com, soheil@...gle.com,
leewalsh@...gle.com, posk@...gle.com,
michalechner92@...glemail.com, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, x86@...nel.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com,
Laurent Dufour <laurent.dufour@...ibm.com>
Subject: Re: [PATCH v4 17/33] mm/mremap: write-lock VMA while remapping it to
a new address range
On Mon, Feb 27, 2023 at 09:36:16AM -0800, Suren Baghdasaryan wrote:
> Write-lock the VMA before copying it and when copy_vma produces
> a new VMA.
>
> Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> Reviewed-by: Laurent Dufour <laurent.dufour@...ibm.com>
> ---
> mm/mmap.c | 1 +
> mm/mremap.c | 1 +
> 2 files changed, 2 insertions(+)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index e73fbb84ce12..1f42b9a52b9b 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -3189,6 +3189,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
> get_file(new_vma->vm_file);
> if (new_vma->vm_ops && new_vma->vm_ops->open)
> new_vma->vm_ops->open(new_vma);
> + vma_start_write(new_vma);
Oh, it's to prevent page faults from being handled during move_page_tables()
(I sketch my understanding below, after the quoted patch).
> if (vma_link(mm, new_vma))
> goto out_vma_link;
> *need_rmap_locks = false;
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 1ddf7beb62e9..327c38eb132e 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -623,6 +623,7 @@ static unsigned long move_vma(struct vm_area_struct *vma,
> return -ENOMEM;
> }
>
> + vma_start_write(vma);
> new_pgoff = vma->vm_pgoff + ((old_addr - vma->vm_start) >> PAGE_SHIFT);
> new_vma = copy_vma(&vma, new_addr, new_len, new_pgoff,
> &need_rmap_locks);
> --
> 2.39.2.722.g9855ee24e9-goog
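To make my understanding of the ordering explicit, here is a tiny userspace
analogy (pthreads, not kernel code; mapping the per-VMA lock to a rwlock and
lock_vma_under_rcu()'s non-blocking read attempt to a try-read-lock is my own
simplification, and the names below are illustrative only):

	/*
	 * Userspace analogy only, not the kernel implementation: the mremap
	 * side write-locks the "VMA" before moving entries, so a concurrent
	 * "fault" whose non-blocking read-lock attempt fails must fall back
	 * to the heavier lock and cannot observe the tables mid-move.
	 * Build with: cc -pthread sketch.c
	 */
	#include <pthread.h>
	#include <stdio.h>

	static pthread_rwlock_t vma_lock = PTHREAD_RWLOCK_INITIALIZER; /* models the per-VMA lock */
	static pthread_mutex_t mmap_lock = PTHREAD_MUTEX_INITIALIZER;  /* models the mmap_lock fallback */

	static void *fault_handler(void *arg)
	{
		/* models the per-VMA-lock fast path: non-blocking read attempt */
		if (pthread_rwlock_tryrdlock(&vma_lock) == 0) {
			puts("fault: handled under the per-VMA lock");
			pthread_rwlock_unlock(&vma_lock);
		} else {
			/* VMA is write-locked by mremap: fall back and wait */
			pthread_mutex_lock(&mmap_lock);
			puts("fault: fell back to mmap_lock; tables already moved");
			pthread_mutex_unlock(&mmap_lock);
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t t;

		pthread_mutex_lock(&mmap_lock);   /* mremap holds mmap_lock for write */
		pthread_rwlock_wrlock(&vma_lock); /* vma_start_write(vma) */
		pthread_create(&t, NULL, fault_handler, NULL);
		puts("mremap: move_page_tables() with the VMA write-locked");
		pthread_rwlock_unlock(&vma_lock);  /* in the real series the write lock is */
		pthread_mutex_unlock(&mmap_lock);  /* only dropped when mmap_lock is released */
		pthread_join(t, NULL);
		return 0;
	}

Depending on scheduling, the concurrent fault is either served only after the
move is complete or falls back to the heavier lock, which is the invariant the
vma_start_write() calls in this patch enforce.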
Looks good to me.
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@...il.com>