Message-ID: <20240717200709.1552558-19-Liam.Howlett@oracle.com>
Date: Wed, 17 Jul 2024 16:07:06 -0400
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>
Cc: Suren Baghdasaryan <surenb@...gle.com>, Vlastimil Babka <vbabka@...e.cz>,
Lorenzo Stoakes <lstoakes@...il.com>,
Matthew Wilcox <willy@...radead.org>, sidhartha.kumar@...cle.com,
"Paul E . McKenney" <paulmck@...nel.org>,
Bert Karwatzki <spasswolf@....de>, Jiri Olsa <olsajiri@...il.com>,
linux-kernel@...r.kernel.org, Kees Cook <kees@...nel.org>,
Jeff Xu <jeffxu@...omium.org>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>
Subject: [PATCH v5 18/21] mm/mmap: Move can_modify_mm() check down the stack
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
Without an arch_unmap() call so high in the call stack, the check for
mseal'ed VMAs can be moved lower as well.  This has the benefit of only
checking whether VMAs are sealed when there is actually something to
check; that is, only when at least one VMA lies within the affected
range.

Only call can_modify_mm() in do_vmi_align_munmap() and in the MAP_FIXED
case of mmap_region().
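
The intended ordering can be sketched with a small userspace model (all
names and types below are illustrative stand-ins, not the kernel API):
the seal check only runs once an overlapping region has been found.

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for a VMA: [start, end) plus a sealed flag. */
struct toy_vma {
	unsigned long start;
	unsigned long end;
	bool sealed;
};

/* Stand-in for can_modify_mm(): false if any overlapping VMA is sealed. */
static bool toy_can_modify(const struct toy_vma *vmas, size_t n,
			   unsigned long start, unsigned long end)
{
	for (size_t i = 0; i < n; i++)
		if (vmas[i].start < end && vmas[i].end > start &&
		    vmas[i].sealed)
			return false;
	return true;
}

/* Stand-in for vma_find(): first VMA overlapping [start, end), or NULL. */
static const struct toy_vma *toy_find(const struct toy_vma *vmas, size_t n,
				      unsigned long start, unsigned long end)
{
	for (size_t i = 0; i < n; i++)
		if (vmas[i].start < end && vmas[i].end > start)
			return &vmas[i];
	return NULL;
}

/*
 * Unmap in the spirit of the patch: look for an overlap first, and only
 * run the seal check when there is actually a VMA in the way.
 */
static int toy_munmap(const struct toy_vma *vmas, size_t n,
		      unsigned long start, unsigned long end)
{
	if (!toy_find(vmas, n, start, end))
		return 0;		/* nothing to unmap, no seal walk */
	if (!toy_can_modify(vmas, n, start, end))
		return -EPERM;		/* sealed VMA in the range */
	return 0;			/* would detach and unmap here */
}
```

With two mappings, one sealed, unmapping an empty range succeeds without
the seal walk, unmapping the sealed range fails with -EPERM, and
unmapping the unsealed one succeeds.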
Signed-off-by: Liam R. Howlett <Liam.Howlett@...cle.com>
Cc: Jeff Xu <jeffxu@...omium.org>
---
 mm/mmap.c | 24 ++++++++----------------
 1 file changed, 8 insertions(+), 16 deletions(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index 117e8240f697..a32f545d3987 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2877,6 +2877,10 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	struct vma_munmap_struct vms;
 	int error;
 
+	/* Prevent unmapping a sealed VMA. */
+	if (unlikely(!can_modify_mm(mm, start, end)))
+		return -EPERM;
+
 	init_vma_munmap(&vms, vmi, vma, start, end, uf, unlock);
 	error = vms_gather_munmap_vmas(&vms, &mas_detach);
 	if (error)
@@ -2927,13 +2931,6 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 	if (end == start)
 		return -EINVAL;
 
-	/*
-	 * Prevent unmapping a sealed VMA.
-	 * can_modify_mm assumes we have acquired the lock on MM.
-	 */
-	if (unlikely(!can_modify_mm(mm, start, end)))
-		return -EPERM;
-
 	/* Find the first overlapping VMA */
 	vma = vma_find(vmi, end);
 	if (!vma) {
@@ -2991,13 +2988,15 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	if (!may_expand_vm(mm, vm_flags, pglen - nr_pages))
 		return -ENOMEM;
 
-	if (unlikely(!can_modify_mm(mm, addr, end)))
-		return -EPERM;
 
 	/* Find the first overlapping VMA */
 	vma = vma_find(&vmi, end);
 	init_vma_munmap(&vms, &vmi, vma, addr, end, uf, /* unlock = */ false);
 	if (vma) {
+		/* Prevent unmapping a sealed VMA. */
+		if (unlikely(!can_modify_mm(mm, addr, end)))
+			return -EPERM;
+
 		mt_init_flags(&mt_detach, vmi.mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
 		mt_on_stack(mt_detach);
 		mas_init(&mas_detach, &mt_detach, /* addr = */ 0);
@@ -3370,13 +3369,6 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	/*
-	 * Prevent unmapping a sealed VMA.
-	 * can_modify_mm assumes we have acquired the lock on MM.
-	 */
-	if (unlikely(!can_modify_mm(mm, start, end)))
-		return -EPERM;
-
 	return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
 }
--
2.43.0