Message-Id: <1436951580-15977-87-git-send-email-luis.henriques@canonical.com>
Date:	Wed, 15 Jul 2015 10:11:21 +0100
From:	Luis Henriques <luis.henriques@...onical.com>
To:	linux-kernel@...r.kernel.org, stable@...r.kernel.org,
	kernel-team@...ts.ubuntu.com
Cc:	Dave Martin <Dave.Martin@....com>,
	Catalin Marinas <catalin.marinas@....com>,
	Luis Henriques <luis.henriques@...onical.com>
Subject: [PATCH 3.16.y-ckt 086/185] arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP

3.16.7-ckt15 -stable review patch.  If anyone has any objections, please let me know.

------------------

From: Dave P Martin <Dave.Martin@....com>

commit b9bcc919931611498e856eae9bf66337330d04cc upstream.

The memmap freeing code in free_unused_memmap() computes the end of
each memblock by adding the memblock size to the base.  However, if
SPARSEMEM is enabled, the value (start) used for the base may already
have been rounded down to a section boundary in order to work out
which memmap entries to free after the previous memblock.

This may cause memmap entries that are in use to get freed.
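
For context, a condensed sketch of the loop as it stood before this
patch (paraphrased from arch/arm64/mm/init.c of that era, not the
verbatim source):

	for_each_memblock(memory, reg) {
		start = __phys_to_pfn(reg->base);
#ifdef CONFIG_SPARSEMEM
		/*
		 * Only free memmap entries up to the end of the section
		 * containing the previous block's end; this min() can
		 * pull 'start' below the pfn of reg->base.
		 */
		start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
#endif
		if (prev_end && prev_end < start)
			free_memmap(prev_end, start);

		/* 'start' may already have been rounded down here */
		prev_end = ALIGN(start + __phys_to_pfn(reg->size),
				 MAX_ORDER_NR_PAGES);
	}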

In general, you're not likely to hit this problem unless there
are at least 2 memblocks and one of them is not aligned to a
sparsemem section boundary.  Note that carve-outs can increase
the number of memblocks by splitting the regions listed in the
device tree.

This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
vmemmap code deals with freeing the unused regions of the memmap
instead of requiring the arch code to do it.

This patch takes the memblock base directly from the memblock region
(reg->base) when computing the block end address, ensuring the correct
value is used.
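
To illustrate the difference with hypothetical numbers (a userspace
re-implementation of ALIGN(), 32768-page sections and 1024-page
MAX_ORDER_NR_PAGES are assumed; this is not kernel code):

#include <stdio.h>

#define ALIGN(x, a)  (((x) + (a) - 1) & ~((a) - 1UL))

int main(void)
{
	unsigned long pages_per_section  = 0x8000;  /* 128MiB of 4K pages */
	unsigned long max_order_nr_pages = 0x400;

	/* Previous block ended mid-section at pfn 0x85000 */
	unsigned long prev_end = 0x85000;
	/* Current block covers pfns [0x8a000, 0x90000) */
	unsigned long base_pfn = 0x8a000, nr_pages = 0x6000;

	/* SPARSEMEM capping pulls 'start' down to the section boundary,
	 * which here lies below base_pfn */
	unsigned long start = ALIGN(prev_end, pages_per_section); /* 0x88000 */

	unsigned long old_end = ALIGN(start + nr_pages, max_order_nr_pages);
	unsigned long new_end = ALIGN(base_pfn + nr_pages, max_order_nr_pages);

	/* old_end = 0x8e000: the memmap for 0x2000 in-use pages looks freeable
	 * new_end = 0x90000: matches the real end of the block
	 */
	printf("old_end=%#lx new_end=%#lx\n", old_end, new_end);
	return 0;
}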

Signed-off-by: Dave Martin <Dave.Martin@....com>
Signed-off-by: Catalin Marinas <catalin.marinas@....com>
Signed-off-by: Luis Henriques <luis.henriques@...onical.com>
---
 arch/arm64/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index e90c5426fe14..01bfca35b0aa 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -239,7 +239,7 @@ static void __init free_unused_memmap(void)
 		 * memmap entries are valid from the bank end aligned to
 		 * MAX_ORDER_NR_PAGES.
 		 */
-		prev_end = ALIGN(start + __phys_to_pfn(reg->size),
+		prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
 				 MAX_ORDER_NR_PAGES);
 	}
 
--
