Message-Id: <20190430113611.059391513@linuxfoundation.org>
Date: Tue, 30 Apr 2019 13:38:12 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Will Deacon <will.deacon@....com>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Catalin Marinas <catalin.marinas@....com>
Subject: [PATCH 5.0 21/89] arm64: mm: Ensure tail of unaligned initrd is reserved
From: Bjorn Andersson <bjorn.andersson@...aro.org>
commit d4d18e3ec6091843f607e8929a56723e28f393a6 upstream.
In the event that the start address of the initrd is not page-aligned but
its size is, base + size will not cover the entire initrd image and there
is a chance that the kernel will corrupt the tail of the image.

By aligning the end of the initrd to a page boundary and then subtracting
the page-aligned start address, the memblock reservation covers all pages
that contain the initrd.
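
As a minimal user-space sketch (not part of the patch), assuming a 4 KiB
page size and a hypothetical unaligned start address, the before/after
arithmetic looks like this:

	/*
	 * Standalone illustration of the reservation arithmetic; the
	 * addresses are made-up examples, not real initrd placements.
	 */
	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_SIZE	4096ULL
	#define PAGE_MASK	(~(PAGE_SIZE - 1))
	#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

	int main(void)
	{
		/* Hypothetical unaligned start with a page-aligned size. */
		uint64_t phys_initrd_start = 0x88800800ULL;
		uint64_t phys_initrd_size  = 0x200000ULL;

		uint64_t base     = phys_initrd_start & PAGE_MASK;
		uint64_t old_size = PAGE_ALIGN(phys_initrd_size);
		uint64_t new_size = PAGE_ALIGN(phys_initrd_start +
					       phys_initrd_size) - base;

		/*
		 * The old reservation ends at 0x88a00000, one page short
		 * of the initrd's last byte at 0x88a00800; the new one
		 * ends at 0x88a01000 and covers the whole image.
		 */
		printf("initrd end:      0x%llx\n",
		       (unsigned long long)(phys_initrd_start +
					    phys_initrd_size));
		printf("old reservation: 0x%llx - 0x%llx\n",
		       (unsigned long long)base,
		       (unsigned long long)(base + old_size));
		printf("new reservation: 0x%llx - 0x%llx\n",
		       (unsigned long long)base,
		       (unsigned long long)(base + new_size));
		return 0;
	}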
Fixes: c756c592e442 ("arm64: Utilize phys_initrd_start/phys_initrd_size")
Cc: stable@...r.kernel.org
Acked-by: Will Deacon <will.deacon@....com>
Signed-off-by: Bjorn Andersson <bjorn.andersson@...aro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@....com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/arm64/mm/init.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -406,7 +406,7 @@ void __init arm64_memblock_init(void)
 		 * Otherwise, this is a no-op
 		 */
 		u64 base = phys_initrd_start & PAGE_MASK;
-		u64 size = PAGE_ALIGN(phys_initrd_size);
+		u64 size = PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;
 
 		/*
 		 * We can only add back the initrd memory if we don't end up