Date:	Mon, 21 Apr 2008 12:56:04 +0100 (BST)
From:	Hugh Dickins <hugh@...itas.com>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
cc:	Mel Gorman <mel@....ul.ie>, Shi Weihua <shiwh@...fujitsu.com>,
	akpm@...ux-foundation.org, balbir@...ux.vnet.ibm.com,
	xemul@...nvz.org, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] Fix usemap for DISCONTIG/FLATMEM with not-aligned zone
 initialization.

On Mon, 21 Apr 2008, KAMEZAWA Hiroyuki wrote:
> usemap must be initialized only when pfn is within the zone.
> If not, it corrupts memory.
> 
> After initialization, usemap is used only for pfns in the valid range.
> (We have to init the memmap even in the invalid range.)
> 
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>

Not something I know enough about to ACK, but this does look
simpler than your earlier patch (and even if Mel's patch had fixed
it, that one may be good for 2.6.26 but might not be suitable for
-stable).

A few doubts below...

> 
> ---
>  mm/page_alloc.c |    6 +++++-
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> Index: linux-2.6.25/mm/page_alloc.c
> ===================================================================
> --- linux-2.6.25.orig/mm/page_alloc.c
> +++ linux-2.6.25/mm/page_alloc.c
> @@ -2518,6 +2518,7 @@ void __meminit memmap_init_zone(unsigned
>  	struct page *page;
>  	unsigned long end_pfn = start_pfn + size;
>  	unsigned long pfn;
> +	struct zone *z;
>  
>  	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
>  		/*
> @@ -2536,7 +2537,7 @@ void __meminit memmap_init_zone(unsigned
>  		init_page_count(page);
>  		reset_page_mapcount(page);
>  		SetPageReserved(page);
> -
> +		z = page_zone(page);

Does this have to be recalculated for every page?  The function name
"memmap_init_zone" suggests it could be done just once (but I'm on
unfamiliar territory here, ignore any nonsense from me).
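
Something like this before the loop, perhaps (untested, and I'm
assuming the zone argument here is the index into node_zones):

	/* look the zone up once instead of per page */
	z = &NODE_DATA(nid)->node_zones[zone];
	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		...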

>  		/*
>  		 * Mark the block movable so that blocks are reserved for
>  		 * movable at startup. This will force kernel allocations
> @@ -2546,7 +2547,9 @@ void __meminit memmap_init_zone(unsigned
>  		 * the start are marked MIGRATE_RESERVE by
>  		 * setup_zone_migrate_reserve()
>  		 */
> -		if ((pfn & (pageblock_nr_pages-1)))
> +		if ((z->zone_start_pfn < pfn)

Shouldn't that be <= ?  As written, pfn == zone_start_pfn fails the
test, so if the zone starts on a pageblock boundary its very first
pageblock never gets marked MIGRATE_MOVABLE.

> +		    && (pfn < z->zone_start_pfn + z->spanned_pages)
> +		    && !(pfn & (pageblock_nr_pages-1)))

Ah, that line (with the ! in) makes more sense than what was there
before; but that's an unrelated (minor) bugfix which you ought to
mention separately in the change comment.
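
To spell it out: the old test

	if ((pfn & (pageblock_nr_pages-1)))
		set_pageblock_migratetype(page, MIGRATE_MOVABLE);

marked every pfn of a pageblock except the aligned first one
(redundantly, once per page), whereas with the ! the marking
happens exactly once, at the first pfn of each pageblock.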

Hugh

>  			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
>  
>  		INIT_LIST_HEAD(&page->lru);
> @@ -4460,6 +4463,8 @@ void set_pageblock_flags_group(struct pa
>  	pfn = page_to_pfn(page);
>  	bitmap = get_pageblock_bitmap(zone, pfn);
>  	bitidx = pfn_to_bitidx(zone, pfn);
> +	VM_BUG_ON(pfn < zone->zone_start_pfn);
> +	VM_BUG_ON(pfn >= zone->zone_start_pfn + zone->spanned_pages);
>  
>  	for (; start_bitidx <= end_bitidx; start_bitidx++, value <<= 1)
>  		if (flags & value)