Date:	Thu, 28 Jun 2012 15:08:39 +0900
From:	"Kim, Jong-Sung" <neidhard.kim@....com>
To:	"'Russell King - ARM Linux'" <linux@....linux.org.uk>
Cc:	"'Minchan Kim'" <minchan@...nel.org>,
	"'Nicolas Pitre'" <nico@...aro.org>,
	"'Catalin Marinas'" <catalin.marinas@....com>,
	<linux-arm-kernel@...ts.infradead.org>,
	<linux-kernel@...r.kernel.org>,
	"'Chanho Min'" <chanho.min@....com>, <linux-mm@...ck.org>
Subject: RE: [PATCH] [RESEND] arm: limit memblock base address for early_pte_alloc

> From: Russell King - ARM Linux [mailto:linux@....linux.org.uk]
> Sent: Thursday, June 28, 2012 4:18 AM
> On Fri, Jun 08, 2012 at 10:58:50PM +0900, Kim, Jong-Sung wrote:
> >
> > May I suggest another simple approach? The first contiguous group of
> > sections is always safely section-mapped inside the alloc_init_section
> > function. So, by limiting memblock_alloc to the end of that first
> > contiguous group of sections at the start of map_lowmem, map_lowmem can
> > safely memblock_alloc & memset even if we have one or more
> > section-unaligned memory regions. The limit can be extended back to
> > arm_lowmem_limit after map_lowmem is done.
> 
> No.  What if the first block of memory is not large enough to handle all
> the allocations?
> 
Thank you for your comment, Russell. In reply to Dave's message, I sent a
modified patch that does not limit allocations to the first memblock_region.
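
(For illustration, the quoted suggestion amounts to something like the sketch
below; first_section_mapped_end() is a hypothetical helper returning the end
of the first contiguous, section-mapped block of lowmem, and the surrounding
calls mirror paging_init() in arch/arm/mm/mmu.c.)

	/* Sketch only, not the patch that was actually sent: clamp early
	 * memblock allocations to already-mapped memory while map_lowmem()
	 * runs. */
	memblock_set_current_limit(first_section_mapped_end());
	map_lowmem();
	/* ... and restore the limit once lowmem is fully mapped. */
	memblock_set_current_limit(arm_lowmem_limit);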

> I think the real problem is folk trying to reserve small amounts.  I have
> said all reservations must be aligned to 1MB.
>
OK, now I understand your position on arm_memblock_steal(). Then, how about
adding a simple alignment step to prevent the possible problem, like this:

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index f54d592..d0daf0d 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -324,6 +324,8 @@ phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align)
 
        BUG_ON(!arm_memblock_steal_permitted);
 
+       size = ALIGN(size, SECTION_SIZE);
+
        phys = memblock_alloc(size, align);
        memblock_free(phys, size);
        memblock_remove(phys, size);

or, alternatively, leaving a few comments in the code about that restriction?
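
(For illustration only, the comment-only alternative might look roughly like
the sketch below, written against the existing arm_memblock_steal() body; the
comment wording is just a suggestion, not a patch being proposed here.)

phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align)
{
	phys_addr_t phys;

	BUG_ON(!arm_memblock_steal_permitted);

	/*
	 * Reservations must be multiples of SECTION_SIZE (1MB): stealing an
	 * unaligned amount leaves the surrounding lowmem boundaries
	 * non-section-aligned, which forces map_lowmem() to allocate early
	 * page tables from memory that may not be mapped yet.
	 */
	phys = memblock_alloc(size, align);
	memblock_free(phys, size);
	memblock_remove(phys, size);

	return phys;
}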



