Message-Id: <20240118133504.2910955-1-shy828301@gmail.com>
Date: Thu, 18 Jan 2024 05:35:04 -0800
From: Yang Shi <shy828301@...il.com>
To: jirislaby@...nel.org,
	surenb@...gle.com,
	riel@...riel.com,
	willy@...radead.org,
	cl@...ux.com,
	akpm@...ux-foundation.org
Cc: yang@...amperecomputing.com,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: [PATCH] mm: huge_memory: don't force huge page alignment on 32 bit

From: Yang Shi <yang@...amperecomputing.com>

Commit efa7df3e3bb5 ("mm: align larger anonymous mappings on THP
boundaries") caused two issues [1] [2], reported on 32 bit systems and in
compat userspace.

It doesn't make much sense to force huge page alignment on 32 bit systems
or in compat userspace due to the constrained virtual address space and
reduced address entropy.

[1] https://lore.kernel.org/linux-mm/CAHbLzkqa1SCBA10yjWTtA2mKCsoK5+M1BthSDL8ROvUq2XxZMw@mail.gmail.com/T/#mf211643a0427f8d6495b5b53f8132f453d60ab95
[2] https://lore.kernel.org/linux-mm/CAHbLzkqa1SCBA10yjWTtA2mKCsoK5+M1BthSDL8ROvUq2XxZMw@mail.gmail.com/T/#me93dff2ccbd9902c3e395e1c022fb454e48ecb1d
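
As an illustration only (not part of the fix), the minimal userspace sketch
below shows what the forced alignment means in practice: any sufficiently
large anonymous mapping is rounded to a PMD boundary, which is cheap on a
64 bit address space but costly when a 32 bit or compat task only has
roughly 3 GiB of virtual address space to work with. The 2 MiB PMD size and
the 8 MiB length are assumptions for a typical x86 configuration.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define PMD_SIZE	(2UL << 20)	/* assumed 2 MiB PMD size (x86) */

int main(void)
{
	size_t len = 8UL << 20;		/* large enough to trigger THP alignment */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	printf("addr %p, PMD aligned: %s\n", p,
	       ((unsigned long)p & (PMD_SIZE - 1)) == 0 ? "yes" : "no");
	munmap(p, len);
	return 0;
}

Built natively on x86_64 this typically reports a PMD aligned address after
efa7df3e3bb5; built with -m32 the same request runs as a compat task and,
with this patch applied, is no longer forced onto a 2 MiB boundary.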

Fixes: efa7df3e3bb5 ("mm: align larger anonymous mappings on THP boundaries")
Reported-by: Jiri Slaby <jirislaby@...nel.org>
Reported-by: Suren Baghdasaryan <surenb@...gle.com>
Tested-by: Jiri Slaby <jirislaby@...nel.org>
Tested-by: Suren Baghdasaryan <surenb@...gle.com>
Cc: Rik van Riel <riel@...riel.com>
Cc: Matthew Wilcox <willy@...radead.org>
Cc: Christopher Lameter <cl@...ux.com>
Signed-off-by: Yang Shi <yang@...amperecomputing.com>
---
 mm/huge_memory.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 94ef5c02b459..e9fbaccbe0c0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -37,6 +37,7 @@
 #include <linux/page_owner.h>
 #include <linux/sched/sysctl.h>
 #include <linux/memory-tiers.h>
+#include <linux/compat.h>
 
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
@@ -811,6 +812,14 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
 	loff_t off_align = round_up(off, size);
 	unsigned long len_pad, ret;
 
+	/*
+	 * It doesn't make too much sense to force huge page alignment on
+	 * 32 bit system or compat userspace due to the constrained virtual
+	 * address space and address entropy.
+	 */
+	if (IS_ENABLED(CONFIG_32BIT) || in_compat_syscall())
+		return 0;
+
 	if (off_end <= off_align || (off_end - off_align) < size)
 		return 0;
 
-- 
2.41.0
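
For completeness, a userspace-only sketch of the arithmetic that the new
early return skips for 32 bit and compat tasks. thp_padded_request() is a
hypothetical helper; only the math is meant to mirror
__thp_get_unmapped_area() above, which pads the searched length by one PMD
size (len_pad) before looking for a free range, so every large mapping asks
for extra virtual address space that a 32 bit task can ill afford.

#include <stdio.h>

#define PMD_SZ		(2UL << 20)			/* assumed 2 MiB PMD */
#define ROUND_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

/* Hypothetical helper: how much free VA space the kernel would search for. */
static unsigned long thp_padded_request(unsigned long len, unsigned long off)
{
	unsigned long off_end = off + len;
	unsigned long off_align = ROUND_UP(off, PMD_SZ);

	/* Same bail-out condition as the hunk above: too small to align. */
	if (off_end <= off_align || (off_end - off_align) < PMD_SZ)
		return len;

	return len + PMD_SZ;	/* len_pad: padded length actually searched */
}

int main(void)
{
	unsigned long len = 8UL << 20;	/* an 8 MiB anonymous mapping */

	printf("request %lu MiB -> search %lu MiB of free address space\n",
	       len >> 20, thp_padded_request(len, 0) >> 20);
	return 0;
}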

