Message-ID: <initrd-reserve-reply2@mdm.bga.com>
Date:	Sun, 22 May 2011 16:17:18 -0500
From:	'Milton Miller' <miltonm@....com>
To:	Dave Carroll <dcarroll@...ekcorp.com>
CC:	Paul Mackerras <paulus@...ba.org>,
	LPPC <linuxppc-dev@...ts.ozlabs.org>,
	LKML <linux-kernel@...r.kernel.org>,
	"Benjamin Herrenschmidt" <benh@...nel.crashing.org>
Subject: Re: [PATCH v3] powerpc: Force page alignment for initrd reserved memory

On Sat, 21 May 2011 about 11:05:27 -0600, Dave Carroll wrote:
> 
> When using 64K pages with a separate cpio rootfs, U-Boot will align
> the rootfs on a 4K page boundary. When this memory is reserved and a
> subsequent early memblock_alloc is called, it will allocate memory
> between the 64K page boundary and the reserved memory. When the
> reserved memory is later freed, it is freed page by page, so the
> early memblock_alloc regions get reused, which, in my case, caused
> the device-tree to be clobbered.
> 
> This patch forces the reserved memory for initrd to be kernel page
> aligned, and adds the same range extension when freeing initrd.

Getting better, but

> 
> 
> Signed-off-by: Dave Carroll <dcarroll@...ekcorp.com>
> ---
>  arch/powerpc/kernel/prom.c |    4 +++-
>  arch/powerpc/mm/init_32.c  |    3 +++
>  arch/powerpc/mm/init_64.c  |    3 +++
>  3 files changed, 9 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
> index 48aeb55..397d4a0 100644
> --- a/arch/powerpc/kernel/prom.c
> +++ b/arch/powerpc/kernel/prom.c
> @@ -555,7 +555,9 @@ static void __init early_reserve_mem(void)
>  #ifdef CONFIG_BLK_DEV_INITRD
>         /* then reserve the initrd, if any */
>         if (initrd_start && (initrd_end > initrd_start))

Here you test the unaligned values

>  void free_initrd_mem(unsigned long start, unsigned long end)
>  {
> +       start = _ALIGN_DOWN(start, PAGE_SIZE);
> +       end = _ALIGN_UP(end, PAGE_SIZE);
> +
>         if (start < end)
>                 printk ("Freeing initrd memory: %ldk freed\n", (end - start) >> 10);

But here you test the aligned values, and the two ends are aligned
with opposite bias.  That means if start == end (or start is less
than end but within the same page), a page that was never reserved
gets freed.  The same problem exists on both 32- and 64-bit.

I wondered "what happens if we are within a page of _end, could we
free the last page of bss?", but then I checked vmlinux.lds and we
align _end to the page size.  I thought other allocations should be
safe, but then remembered:

The flattened device tree (of which we continue to use the string
table after boot) could be a problem.

milton

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/