Date:   Fri, 13 May 2022 10:44:57 -0000
From:   "tip-bot2 for Adrian-Ken Rueegsegger" <tip-bot2@...utronix.de>
To:     linux-tip-commits@...r.kernel.org
Cc:     "Adrian-Ken Rueegsegger" <ken@...elabs.ch>,
        Thomas Gleixner <tglx@...utronix.de>,
        Oscar Salvador <osalvador@...e.de>,
        David Hildenbrand <david@...hat.com>, stable@...r.kernel.org,
        x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [tip: x86/urgent] x86/mm: Fix marking of unused sub-pmd ranges

The following commit has been merged into the x86/urgent branch of tip:

Commit-ID:     280abe14b6e0a38de9cc86fe6a019523aadd8f70
Gitweb:        https://git.kernel.org/tip/280abe14b6e0a38de9cc86fe6a019523aadd8f70
Author:        Adrian-Ken Rueegsegger <ken@...elabs.ch>
AuthorDate:    Mon, 09 May 2022 11:06:37 +02:00
Committer:     Thomas Gleixner <tglx@...utronix.de>
CommitterDate: Fri, 13 May 2022 12:41:21 +02:00

x86/mm: Fix marking of unused sub-pmd ranges

The unused part precedes the new range spanned by the start and end
parameters of vmemmap_use_new_sub_pmd(), i.e. it runs from
ALIGN_DOWN(start, PMD_SIZE) up to start.

Use the correct start address when applying the PAGE_UNUSED mark with memset().
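
For illustration only (this sketch is not part of the commit): a minimal
userspace program showing the address arithmetic, assuming the x86_64
PMD_SIZE of 2 MiB and a made-up, non-PMD-aligned start address. The
ALIGN_DOWN macro here mirrors the kernel's definition.

	#include <stdio.h>

	#define PMD_SIZE	0x200000UL		/* 2 MiB on x86_64 */
	#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1UL))

	int main(void)
	{
		/* hypothetical vmemmap address, not PMD-aligned */
		unsigned long start = 0x1234000UL;
		unsigned long page  = ALIGN_DOWN(start, PMD_SIZE);

		/* The unused part precedes start: [page, start) */
		printf("unused range: [%#lx, %#lx), %#lx bytes\n",
		       page, start, start - page);

		/*
		 * The buggy code instead marked [start, start + (start - page)),
		 * i.e. the beginning of the part that is actually in use.
		 */
		return 0;
	}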

Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
Signed-off-by: Adrian-Ken Rueegsegger <ken@...elabs.ch>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Reviewed-by: Oscar Salvador <osalvador@...e.de>
Reviewed-by: David Hildenbrand <david@...hat.com>
Cc: stable@...r.kernel.org
Link: https://lore.kernel.org/r/20220509090637.24152-2-ken@codelabs.ch
---
 arch/x86/mm/init_64.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 96d34eb..e294233 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -902,6 +902,8 @@ static void __meminit vmemmap_use_sub_pmd(unsigned long start, unsigned long end
 
 static void __meminit vmemmap_use_new_sub_pmd(unsigned long start, unsigned long end)
 {
+	const unsigned long page = ALIGN_DOWN(start, PMD_SIZE);
+
 	vmemmap_flush_unused_pmd();
 
 	/*
@@ -914,8 +916,7 @@ static void __meminit vmemmap_use_new_sub_pmd(unsigned long start, unsigned long
 	 * Mark with PAGE_UNUSED the unused parts of the new memmap range
 	 */
 	if (!IS_ALIGNED(start, PMD_SIZE))
-		memset((void *)start, PAGE_UNUSED,
-			start - ALIGN_DOWN(start, PMD_SIZE));
+		memset((void *)page, PAGE_UNUSED, start - page);
 
 	/*
 	 * We want to avoid memset(PAGE_UNUSED) when populating the vmemmap of
