Message-ID: <20080423124619.GA9092@csn.ul.ie>
Date: Wed, 23 Apr 2008 13:46:20 +0100
From: Mel Gorman <mel@....ul.ie>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc: akpm@...ux-foundation.org, Hugh Dickins <hugh@...itas.com>,
Shi Weihua <shiwh@...fujitsu.com>, balbir@...ux.vnet.ibm.com,
xemul@...nvz.org, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [BUGFIX][PATCH] Fix usemap initialization v3
On (23/04/08 13:46), KAMEZAWA Hiroyuki didst pronounce:
> fixed typos.
> ==
> The usemap must be initialized only when the pfn is within the zone.
> Otherwise, it corrupts memory.
>
> This patch also reduces the number of calls to set_pageblock_migratetype()
> by changing the condition from
> 	(pfn & (pageblock_nr_pages - 1))
> to
> 	!(pfn & (pageblock_nr_pages - 1))
> so that it is called only once per pageblock.
>
Nicely spotted.
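For anyone following along, here is a quick user-space sketch (not kernel
code, and the pageblock size is an arbitrary illustrative value) of why the
negated test fires only once per pageblock while the old test fired for every
other pfn in it:

	#include <stdio.h>

	#define PAGEBLOCK_NR_PAGES 1024UL	/* illustrative value only */

	int main(void)
	{
		unsigned long pfn, old_hits = 0, new_hits = 0;

		for (pfn = 0; pfn < 4 * PAGEBLOCK_NR_PAGES; pfn++) {
			/* old condition: true for all but the aligned pfn */
			if (pfn & (PAGEBLOCK_NR_PAGES - 1))
				old_hits++;
			/* new condition: true only for the aligned pfn */
			if (!(pfn & (PAGEBLOCK_NR_PAGES - 1)))
				new_hits++;
		}
		printf("old: %lu hits, new: %lu hits over 4 pageblocks\n",
		       old_hits, new_hits);
		return 0;
	}

Over four pageblocks that reports 4092 hits for the old condition and 4 for
the new one.
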
> Changelog.
> v2->v3
> - Fixed typos.
> v1->v2
> - Fixed boundary check.
> - Moved the calculation of the zone struct pointer out of the loop.
>
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
>
> ---
> mm/page_alloc.c | 14 ++++++++++++--
> 1 file changed, 12 insertions(+), 2 deletions(-)
>
> Index: linux-2.6.25/mm/page_alloc.c
> ===================================================================
> --- linux-2.6.25.orig/mm/page_alloc.c
> +++ linux-2.6.25/mm/page_alloc.c
> @@ -2518,7 +2518,9 @@ void __meminit memmap_init_zone(unsigned
> struct page *page;
> unsigned long end_pfn = start_pfn + size;
> unsigned long pfn;
> + struct zone *z;
>
> + z = &NODE_DATA(nid)->node_zones[zone];
> for (pfn = start_pfn; pfn < end_pfn; pfn++) {
Ok, this is fine. The fact that zone is an index rather than a struct zone
is a little confusing, but that's not your fault.
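To spell that out for anyone reading the archive later, a rough sketch of
the lookup the patch adds (identifiers as in mainline; nothing new is being
proposed here):

	/*
	 * 'zone' is an index into the node's zone array (ZONE_DMA,
	 * ZONE_NORMAL, ...), so the struct zone has to be fetched
	 * through the node's pg_data_t rather than used directly.
	 */
	pg_data_t *pgdat = NODE_DATA(nid);
	struct zone *z = &pgdat->node_zones[zone];
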
> /*
> * There can be holes in boot-time mem_map[]s
> @@ -2536,7 +2538,6 @@ void __meminit memmap_init_zone(unsigned
> init_page_count(page);
> reset_page_mapcount(page);
> SetPageReserved(page);
> -
> /*
> * Mark the block movable so that blocks are reserved for
> * movable at startup. This will force kernel allocations
Spurious whitespace change there.
> @@ -2545,8 +2546,15 @@ void __meminit memmap_init_zone(unsigned
> * kernel allocations are made. Later some blocks near
> * the start are marked MIGRATE_RESERVE by
> * setup_zone_migrate_reserve()
> + *
> + * The bitmap is created for the zone's valid pfn range, but the
> + * memmap can be created for invalid pages (for alignment). Check
> + * here so that set_pageblock_migratetype() is not called for a
> + * pfn outside the zone.
> */
> - if ((pfn & (pageblock_nr_pages-1)))
> + if ((z->zone_start_pfn <= pfn)
> + && (pfn < z->zone_start_pfn + z->spanned_pages)
> + && !(pfn & (pageblock_nr_pages - 1)))
> set_pageblock_migratetype(page, MIGRATE_MOVABLE);
>
This looks correct. The boundary check is definitely right, and
set_pageblock_migratetype() is now called only once per pageblock.
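In other words, the new condition amounts to something like the following
(pure sketch, the helper name is made up and is not part of the patch):

	/*
	 * A pfn is only eligible if it lies within the zone's spanned
	 * range and is the first pfn of its pageblock.
	 */
	static inline int pageblock_start_in_zone(struct zone *z,
						  unsigned long pfn)
	{
		if (pfn < z->zone_start_pfn)
			return 0;
		if (pfn >= z->zone_start_pfn + z->spanned_pages)
			return 0;
		return !(pfn & (pageblock_nr_pages - 1));
	}
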
> INIT_LIST_HEAD(&page->lru);
> @@ -4460,6 +4468,8 @@ void set_pageblock_flags_group(struct pa
> pfn = page_to_pfn(page);
> bitmap = get_pageblock_bitmap(zone, pfn);
> bitidx = pfn_to_bitidx(zone, pfn);
> + VM_BUG_ON(pfn < zone->zone_start_pfn);
> + VM_BUG_ON(pfn >= zone->zone_start_pfn + zone->spanned_pages);
>
Looks good; it would have caught this particular error earlier.
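Worth noting that VM_BUG_ON() only expands to a real check when
CONFIG_DEBUG_VM is set, so the assertions cost nothing on production builds.
From memory, the definition is roughly:

	#ifdef CONFIG_DEBUG_VM
	#define VM_BUG_ON(cond)		BUG_ON(cond)
	#else
	#define VM_BUG_ON(cond)		do { } while (0)
	#endif
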
> for (; start_bitidx <= end_bitidx; start_bitidx++, value <<= 1)
> if (flags & value)
>
Seems fine and boots successfully on a number of machines.
Thanks
Acked-by: Mel Gorman <mel@....ul.ie>
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab