Message-Id: <1436370037-25874-11-git-send-email-kamal@canonical.com>
Date:	Wed,  8 Jul 2015 08:39:51 -0700
From:	Kamal Mostafa <kamal@...onical.com>
To:	linux-kernel@...r.kernel.org, stable@...r.kernel.org,
	kernel-team@...ts.ubuntu.com
Cc:	Mark Rutland <mark.rutland@....com>,
	Catalin Marinas <catalin.marinas@....com>,
	Steve Capper <steve.capper@...aro.org>,
	Russell King <rmk+kernel@....linux.org.uk>,
	Kamal Mostafa <kamal@...onical.com>
Subject: [PATCH 3.13.y-ckt 10/56] ARM: 8356/1: mm: handle non-pmd-aligned end of RAM

3.13.11-ckt23 -stable review patch.  If anyone has any objections, please let me know.

------------------

From: Mark Rutland <mark.rutland@....com>

commit 965278dcb8ab0b1f666cc47937933c4be4aea48d upstream.

At boot time we round the memblock limit down to section size in an
attempt to ensure that we will have mapped this RAM with section
mappings prior to allocating from it. When mapping RAM we iterate over
PMD-sized chunks, creating these section mappings.
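
As a minimal userspace sketch of that walk (this is not the kernel's
actual mapping code; the address range and helper below are made up
for illustration), the following shows which chunks can be covered by
section mappings and which would need a page table allocated for page
mappings:

/*
 * Illustrative userspace sketch only -- not the kernel's mapping code.
 * Walk a hypothetical lowmem range in PMD_SIZE steps and report which
 * chunks can use section mappings and which need a page table.
 */
#include <stdio.h>

#define SECTION_SIZE (1UL << 20)   /* 1 MiB: classic ARM MMU section   */
#define PMD_SIZE     (2UL << 20)   /* 2 MiB: one Linux PMD, 2 sections */

static void walk_lowmem(unsigned long start, unsigned long end)
{
	unsigned long addr = start;

	while (addr < end) {
		unsigned long next = (addr & ~(PMD_SIZE - 1)) + PMD_SIZE;

		if (next > end)
			next = end;

		if (!((addr | next) & (SECTION_SIZE - 1)))
			printf("%#010lx-%#010lx: section mapping\n",
			       addr, next);
		else
			printf("%#010lx-%#010lx: page mappings "
			       "(page table must be allocated)\n",
			       addr, next);
		addr = next;
	}
}

int main(void)
{
	/* Hypothetical bank whose end is not PMD (2 MiB) aligned. */
	walk_lowmem(0x2f000000UL, 0x2f780000UL);
	return 0;
}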

Section mappings are only created when the end of a chunk is aligned to
section size. Unfortunately, with classic page tables (where PMD_SIZE is
2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in
size, the first 1M will not be mapped despite having been accounted for
in the memblock limit. This has been observed to result in page tables
being allocated from unmapped memory, causing boot-time hangs.
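
As a concrete (hypothetical) illustration: if lowmem ends at 0x2f780000,
the final PMD-sized chunk is [0x2f600000, 0x2f780000), i.e. 1.5M. The
old code rounds the memblock limit down to 0x2f700000, so early page
table allocations may be satisfied from [0x2f600000, 0x2f700000), which
has not been given a section mapping.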

This patch modifies the memblock limit rounding to always round down to
PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we
will round the memblock limit down to a 2M boundary, matching the limits
on section mappings, and preventing allocations from unmapped memory.
For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.
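
A small standalone sketch of the rounding change (the end-of-RAM value
is hypothetical, and round_down() is re-implemented here purely for
illustration) shows the difference:

/*
 * Illustrative only: compare the old and new rounding of the memblock
 * limit for an end-of-RAM address that is not PMD-aligned.
 */
#include <stdio.h>

#define SECTION_SIZE (1UL << 20)   /* 1 MiB */
#define PMD_SIZE     (2UL << 20)   /* 2 MiB */

/* Same idea as the kernel's round_down() for power-of-two alignment. */
#define round_down(x, a) ((x) & ~((a) - 1))

int main(void)
{
	unsigned long end_of_ram = 0x2f780000UL;   /* hypothetical */

	printf("end of RAM:          %#010lx\n", end_of_ram);
	printf("old (SECTION_SIZE):  %#010lx\n",
	       round_down(end_of_ram, SECTION_SIZE));
	printf("new (PMD_SIZE):      %#010lx\n",
	       round_down(end_of_ram, PMD_SIZE));
	return 0;
}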

Signed-off-by: Mark Rutland <mark.rutland@....com>
Reported-by: Stefan Agner <stefan@...er.ch>
Tested-by: Stefan Agner <stefan@...er.ch>
Acked-by: Laura Abbott <labbott@...hat.com>
Tested-by: Hans de Goede <hdegoede@...hat.com>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Steve Capper <steve.capper@...aro.org>
Signed-off-by: Russell King <rmk+kernel@....linux.org.uk>
[ kamal: backport to 3.13-stable: block_start, block_end rename ]
Signed-off-by: Kamal Mostafa <kamal@...onical.com>
---
 arch/arm/mm/mmu.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index b8c90c5..7425ed6 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1077,22 +1077,22 @@ void __init sanity_check_meminfo(void)
 				arm_lowmem_limit = bank_end;
 
 			/*
-			 * Find the first non-section-aligned page, and point
+			 * Find the first non-pmd-aligned page, and point
 			 * memblock_limit at it. This relies on rounding the
-			 * limit down to be section-aligned, which happens at
-			 * the end of this function.
+			 * limit down to be pmd-aligned, which happens at the
+			 * end of this function.
 			 *
 			 * With this algorithm, the start or end of almost any
-			 * bank can be non-section-aligned. The only exception
-			 * is that the start of the bank 0 must be section-
+			 * bank can be non-pmd-aligned. The only exception is
+			 * that the start of the bank 0 must be section-
 			 * aligned, since otherwise memory would need to be
 			 * allocated when mapping the start of bank 0, which
 			 * occurs before any free memory is mapped.
 			 */
 			if (!memblock_limit) {
-				if (!IS_ALIGNED(bank->start, SECTION_SIZE))
+				if (!IS_ALIGNED(bank->start, PMD_SIZE))
 					memblock_limit = bank->start;
-				else if (!IS_ALIGNED(bank_end, SECTION_SIZE))
+				else if (!IS_ALIGNED(bank_end, PMD_SIZE))
 					memblock_limit = bank_end;
 			}
 		}
@@ -1122,12 +1122,12 @@ void __init sanity_check_meminfo(void)
 	high_memory = __va(arm_lowmem_limit - 1) + 1;
 
 	/*
-	 * Round the memblock limit down to a section size.  This
+	 * Round the memblock limit down to a pmd size.  This
 	 * helps to ensure that we will allocate memory from the
-	 * last full section, which should be mapped.
+	 * last full pmd, which should be mapped.
 	 */
 	if (memblock_limit)
-		memblock_limit = round_down(memblock_limit, SECTION_SIZE);
+		memblock_limit = round_down(memblock_limit, PMD_SIZE);
 	if (!memblock_limit)
 		memblock_limit = arm_lowmem_limit;
 
-- 
1.9.1

