Message-ID: <20120621201728.GB4642@google.com>
Date: Thu, 21 Jun 2012 13:17:28 -0700
From: Tejun Heo <tj@...nel.org>
To: Yinghai Lu <yinghai@...nel.org>
Cc: Gavin Shan <shangw@...ux.vnet.ibm.com>,
Sasha Levin <levinsasha928@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Miller <davem@...emloft.net>, hpa@...ux.intel.com,
linux-mm <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: Early boot panic on machine with lots of memory
Hello, Yinghai.
On Tue, Jun 19, 2012 at 07:57:45PM -0700, Yinghai Lu wrote:
> if that is the case, that change could fix another problem too
> --- during the freeing of reserved.regions, the array could be doubled.
Yeah, that sounds much more attractive to me too. Some comments on
the patch tho.
> /**
> * memblock_double_array - double the size of the memblock regions array
> * @type: memblock type of the regions array being doubled
> @@ -216,7 +204,7 @@ static int __init_memblock memblock_doub
>
> /* Calculate new doubled size */
> old_size = type->max * sizeof(struct memblock_region);
> - new_size = old_size << 1;
> + new_size = PAGE_ALIGN(old_size << 1);
We definitely can use some comments explaining why we want page
alignment. It's kinda subtle.
This is a bit confusing here because old_size is the proper size
without padding while new_size is the page-aligned size with possible
padding. Maybe distinguishing them as {old|new}_alloc_size would be
clearer? Also, I think adding a @new_cnt variable which is calculated
together would make the code easier to follow. So, something like,
/* explain why page aligning is necessary */
old_size = type->max * sizeof(struct memblock_region);
old_alloc_size = PAGE_ALIGN(old_size);
new_max = type->max << 1;
new_size = new_max * sizeof(struct memblock_region);
new_alloc_size = PAGE_ALIGN(new_size);
and use alloc_sizes for alloc/frees and sizes for everything else.
> unsigned long __init free_low_memory_core_early(int nodeid)
> {
> unsigned long count = 0;
> - phys_addr_t start, end;
> + phys_addr_t start, end, size;
> u64 i;
>
> - /* free reserved array temporarily so that it's treated as free area */
> - memblock_free_reserved_regions();
> + for_each_free_mem_range(i, MAX_NUMNODES, &start, &end, NULL)
> + count += __free_memory_core(start, end);
>
> - for_each_free_mem_range(i, MAX_NUMNODES, &start, &end, NULL) {
> - unsigned long start_pfn = PFN_UP(start);
> - unsigned long end_pfn = min_t(unsigned long,
> - PFN_DOWN(end), max_low_pfn);
> - if (start_pfn < end_pfn) {
> - __free_pages_memory(start_pfn, end_pfn);
> - count += end_pfn - start_pfn;
> - }
> - }
> + /* free range that is used for reserved array if we allocate it */
> + size = get_allocated_memblock_reserved_regions_info(&start);
> + if (size)
> + count += __free_memory_core(start, start + size);
I'm afraid this is too early. We don't want the region to be unmapped
yet. This should only happen after all memblock usages are finished
which I don't think is the case yet.
Thanks.
--
tejun