Message-ID: <20241219211552.1450226-1-audra@redhat.com>
Date: Thu, 19 Dec 2024 16:15:52 -0500
From: Audra Mitchell <audra@...hat.com>
To: linux-mm@...ck.org
Cc: audra@...hat.com,
raquini@...hat.com,
aris@...hat.com,
akpm@...ux-foundation.org,
willy@...radead.org,
william.kucharski@...cle.com,
linux-kernel@...r.kernel.org
Subject: [PATCH] mm: Stop PMD alignment for PIE shared objects

After commit 1854bc6e2420 ("mm/readahead: Align file mappings for non-DAX"),
any request through thp_get_unmapped_area() gets aligned to PMD_SIZE,
leaving shared objects with less randomization than before (9 fewer bits
with 2MB PMDs). As these lower 9 bits are the most impactful for ASLR,
this change could be argued to have an impact on security.
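
As a cross-check on the arithmetic, here is a minimal userspace sketch (not
part of this patch; it assumes 4K pages and 2MB PMDs, as on x86_64) showing
where the 9-bit figure comes from:

  #include <stdio.h>

  int main(void)
  {
          unsigned long page_size = 4096UL;     /* 2^12 */
          unsigned long pmd_size  = 2UL << 20;  /* 2^21, i.e. 2MB */

          /* PMD alignment forces the low 21 address bits to zero
           * instead of the low 12, removing 21 - 12 = 9 bits of
           * mmap randomization. */
          printf("%d bits lost\n",
                 __builtin_ctzl(pmd_size) - __builtin_ctzl(page_size));
          return 0;
  }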

Running the pie-so program [1] multiple times, we can see the loss of
randomization as the lower address bits get aligned to a 2MB boundary on
x86 (the pie-so row):
# ./all-gather && ./all-bits
---------[SNIP]---------
aslr heap 19 bits
aslr exec 00 bits
aslr mmap 29 bits
aslr so 00 bits
aslr stack 31 bits
aslr pie-exec 30 bits
aslr pie-heap 30 bits
aslr pie-so 20 bits
aslr pie-mmap 29 bits
aslr pie-stack 30 bits
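
The tool in [1] does the real measurement; purely as an illustration of the
idea, here is a hypothetical sketch that reads one hex address per program
run from stdin and counts how many address bits were ever observed to
change:

  #include <stdio.h>

  int main(void)
  {
          unsigned long addr, first = 0, changed = 0;
          int have_first = 0;

          while (scanf("%lx", &addr) == 1) {
                  if (!have_first) {
                          first = addr;
                          have_first = 1;
                  } else {
                          /* Record every bit that differs between runs. */
                          changed |= addr ^ first;
                  }
          }
          printf("%d bits of randomization\n",
                 __builtin_popcountl(changed));
          return 0;
  }

With PMD-aligned mappings, bits 12-20 of the address never change, which is
the 9-bit loss visible in the pie-so row above.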

Fix this issue by checking that the requested length is aligned to
PMD_SIZE; if it is not, return 0 so that we fall back to
mm_get_unmapped_area_vmflags().
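
For power-of-two sizes, IS_ALIGNED() is just a mask test; here is a minimal
userspace sketch of the new check (with PMD_SIZE hardcoded to the assumed
2MB x86_64 value):

  #include <stdio.h>
  #include <stdbool.h>

  #define PMD_SIZE (2UL << 20)  /* assumed 2MB PMDs, as on x86_64 */

  /* Same mask test the kernel's IS_ALIGNED() performs for
   * power-of-two alignments. */
  static bool is_aligned(unsigned long len, unsigned long align)
  {
          return (len & (align - 1)) == 0;
  }

  int main(void)
  {
          /* A request that is not a whole number of PMDs gains nothing
           * from PMD alignment, so __thp_get_unmapped_area() now returns
           * 0 and the caller falls back to mm_get_unmapped_area_vmflags(). */
          printf("%d\n", is_aligned(1UL << 20, PMD_SIZE));  /* 0: 1MB */
          printf("%d\n", is_aligned(4UL << 20, PMD_SIZE));  /* 1: 4MB */
          return 0;
  }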
[1] https://github.com/stevegrubb/distro-elf-inspector
Fixes: 1854bc6e2420 ("mm/readahead: Align file mappings for non-DAX")
Signed-off-by: Audra Mitchell <audra@...hat.com>
---
mm/huge_memory.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ee335d96fc39..696caf6cbf4a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1101,6 +1101,9 @@ static unsigned long __thp_get_unmapped_area(struct file *filp,
 	if (len_pad < len || (off + len_pad) < off)
 		return 0;
 
+	if (!IS_ALIGNED(len, PMD_SIZE))
+		return 0;
+
 	ret = mm_get_unmapped_area_vmflags(current->mm, filp, addr, len_pad,
 					   off >> PAGE_SHIFT, flags, vm_flags);
--
2.45.0