Message-ID: <20230803172652.2849981-6-surenb@google.com>
Date: Thu, 3 Aug 2023 10:26:50 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: akpm@...ux-foundation.org
Cc: torvalds@...ux-foundation.org, jannh@...gle.com,
willy@...radead.org, liam.howlett@...cle.com, david@...hat.com,
peterx@...hat.com, ldufour@...ux.ibm.com, vbabka@...e.cz,
michel@...pinasse.org, jglisse@...gle.com, mhocko@...e.com,
hannes@...xchg.org, dave@...olabs.net, hughd@...gle.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
stable@...r.kernel.org, Suren Baghdasaryan <surenb@...gle.com>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>
Subject: [PATCH v3 5/6] mm: always lock new vma before inserting into vma tree
While it's not strictly necessary to lock a newly created vma before
adding it into the vma tree (as long as no further changes are performed
to it), it seems like a good policy to lock it and prevent accidental
changes after it becomes visible to page fault handlers. Lock the vma
before adding it into the vma tree.
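For reference, the ordering this establishes for any path that publishes
a new vma looks roughly like the following (a sketch assembled from the
helpers touched in the hunks below, not a literal excerpt from the patch):

	/* 1. finish initializing the vma while it is still private */
	vm_flags_init(vma, flags);
	vma->vm_page_prot = vm_get_page_prot(flags);

	/* 2. write-lock it so concurrent lockless page faults back off */
	vma_start_write(vma);

	/* 3. only then make it visible in the vma tree */
	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
		goto mas_store_fail;
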
Suggested-by: Jann Horn <jannh@...gle.com>
Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@...cle.com>
---
mm/mmap.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index 3937479d0e07..850a39dee075 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -412,6 +412,8 @@ static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
if (vma_iter_prealloc(&vmi))
return -ENOMEM;
+ vma_start_write(vma);
+
if (vma->vm_file) {
mapping = vma->vm_file->f_mapping;
i_mmap_lock_write(mapping);
@@ -477,7 +479,8 @@ static inline void vma_prepare(struct vma_prepare *vp)
vma_start_write(vp->vma);
if (vp->adj_next)
vma_start_write(vp->adj_next);
- /* vp->insert is always a newly created VMA, no need for locking */
+ if (vp->insert)
+ vma_start_write(vp->insert);
if (vp->remove)
vma_start_write(vp->remove);
if (vp->remove2)
@@ -3098,6 +3101,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
vma->vm_pgoff = addr >> PAGE_SHIFT;
vm_flags_init(vma, flags);
vma->vm_page_prot = vm_get_page_prot(flags);
+ vma_start_write(vma);
if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
goto mas_store_fail;
@@ -3345,7 +3349,6 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
get_file(new_vma->vm_file);
if (new_vma->vm_ops && new_vma->vm_ops->open)
new_vma->vm_ops->open(new_vma);
- vma_start_write(new_vma);
if (vma_link(mm, new_vma))
goto out_vma_link;
*need_rmap_locks = false;
--
2.41.0.585.gd2178a4bd4-goog