Message-ID: <20250123205849.793810-1-Liam.Howlett@oracle.com>
Date: Thu, 23 Jan 2025 15:58:49 -0500
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: maple-tree@...ts.infradead.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        "Liam R. Howlett" <Liam.Howlett@...cle.com>,
        Jann Horn <jannh@...gle.com>,
        Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
        Peter Zijlstra <peterz@...radead.org>, Michal Hocko <mhocko@...e.com>,
        Peng Zhang <zhangpeng.00@...edance.com>
Subject: [PATCH] kernel/fork: Be more careful about dup_mmap() failures

From: "Liam R. Howlett" <Liam.Howlett@...cle.com>

In the event that there is a failure during dup_mmap(), the maple tree
can be left in an unsafe state for other iterators besides the exit
path.

The unsafe state is created after the tree is cloned, but before the
vmas are replaced; if a vma allocation fails (for instance), then the
tree will have a marker (XA_ZERO_ENTRY) to denote where to stop
destroying vmas on the exit path.  This marker replaces a vma in the
tree and may be treated as a pointer to a vma in iterators besides the
special case exit_mmap() iterator.

All the locks are dropped before the exit_mmap() call, but the
incomplete mm_struct can still be reached through (at least) the rmap,
which finds vmas that hold a pointer back to the mm_struct.

Up to this point, there have been no issues with being able to find an
mm_struct that was only partially initialised.  Syzbot was able to make
the incomplete mm_struct fail with recent forking changes, so it has
been proven unsafe to use an mm_struct that hasn't been fully
initialised, as referenced in the link below.

Although 8ac662f5da19f ("fork: avoid inappropriate uprobe access to
invalid mm") fixed the uprobe access, it does not completely remove the
race.

This patch sets the MMF_OOM_SKIP flag to avoid iteration of the vmas on
the oom side (even though this mm is extremely unlikely to be selected
as an oom victim in the race window), and sets MMF_UNSTABLE to prevent
other potential users from using a partially initialised mm_struct.

Link: https://lore.kernel.org/all/6756d273.050a0220.2477f.003d.GAE@google.com/
Fixes: d240629148377 ("fork: use __mt_dup() to duplicate maple tree in dup_mmap()")
Cc: Jann Horn <jannh@...gle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Peng Zhang <zhangpeng.00@...edance.com>
Signed-off-by: Liam R. Howlett <Liam.Howlett@...cle.com>
---
 kernel/fork.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/kernel/fork.c b/kernel/fork.c
index ded49f18cd95c..20b2120f019ca 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -760,7 +760,8 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		mt_set_in_rcu(vmi.mas.tree);
 		ksm_fork(mm, oldmm);
 		khugepaged_fork(mm, oldmm);
-	} else if (mpnt) {
+	} else {
+
 		/*
 		 * The entire maple tree has already been duplicated. If the
 		 * mmap duplication fails, mark the failure point with
@@ -768,8 +769,18 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
 		 * stop releasing VMAs that have not been duplicated after this
 		 * point.
 		 */
-		mas_set_range(&vmi.mas, mpnt->vm_start, mpnt->vm_end - 1);
-		mas_store(&vmi.mas, XA_ZERO_ENTRY);
+		if (mpnt) {
+			mas_set_range(&vmi.mas, mpnt->vm_start, mpnt->vm_end - 1);
+			mas_store(&vmi.mas, XA_ZERO_ENTRY);
+			/* Avoid OOM iterating a broken tree */
+			set_bit(MMF_OOM_SKIP, &mm->flags);
+		}
+		/*
+		 * The mm_struct is going to exit, but the locks will be dropped
+		 * first.  Marking the mm_struct as unstable is advisable, as it
+		 * is not fully initialised.
+		 */
+		set_bit(MMF_UNSTABLE, &mm->flags);
 	}
 out:
 	mmap_write_unlock(mm);
-- 
2.43.0

