Message-ID: <4CC753AD.1090403@goop.org>
Date: Tue, 26 Oct 2010 15:18:21 -0700
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: Yinghai Lu <yinghai@...nel.org>
CC: "H. Peter Anvin" <hpa@...or.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Subject: early_node_mem()'s memory allocation policy

We're seeing problems under Xen where large portions of memory can be
reserved (because they're not yet physically present, even though they
appear in the E820 map), and the 'start' and 'end' that early_node_mem()
chooses fall entirely within that reserved range.

Also, the code seems dubious because it adjusts start and end without
regard to how much space it is trying to allocate:
    /* extend the search scope */
    end = max_pfn_mapped << PAGE_SHIFT;
    if (end > (MAX_DMA32_PFN<<PAGE_SHIFT))
            start = MAX_DMA32_PFN<<PAGE_SHIFT;
    else
            start = MAX_DMA_PFN<<PAGE_SHIFT;

What if max_pfn_mapped is only a few pages above MAX_DMA32_PFN, so that
the widened window is smaller than the size it is trying to allocate?
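
Just to illustrate what I mean (an untested sketch, assuming the 'size'
argument that early_node_mem() takes), the extension could at least
check that the wider window can actually hold the request:

    /* only extend the search scope if the wider window can hold 'size' */
    unsigned long new_end = max_pfn_mapped << PAGE_SHIFT;
    unsigned long new_start = (new_end > (MAX_DMA32_PFN << PAGE_SHIFT)) ?
                              (MAX_DMA32_PFN << PAGE_SHIFT) :
                              (MAX_DMA_PFN << PAGE_SHIFT);

    if (new_end > new_start && new_end - new_start >= size) {
            start = new_start;
            end = new_end;
    }
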
I tried just removing the start and end adjustments in early_node_mem()
and the kernel booted fine under Xen, but it seemed to allocate at a
very low address. Should the for_each_active_range_index_in_nid() loop
in find_memory_core_early() be iterating from high to low addresses? If
the allocation could be relied on to be top-down, then you wouldn't need
to adjust start at all, and it would return the highest available memory
in a natural way.
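
For illustration, the kind of scan I have in mind looks roughly like
this; the struct and helper are hypothetical stand-ins rather than the
real early_node_map[] plumbing, just to show the top-down idea:

    /* hypothetical range descriptor: [start, end) in bytes */
    struct mem_range {
            unsigned long start;
            unsigned long end;
    };

    /*
     * Walk the ranges from the highest one downward (assumes 'ranges'
     * is sorted by address, lowest first) and return the highest
     * 'align'-aligned address that can hold 'size' bytes, where
     * 'align' is a power of two, or -1UL if nothing fits.
     */
    static unsigned long find_topdown(const struct mem_range *ranges,
                                      int nr, unsigned long size,
                                      unsigned long align)
    {
            int i;

            for (i = nr - 1; i >= 0; i--) {
                    unsigned long cand;

                    if (ranges[i].end - ranges[i].start < size)
                            continue;
                    cand = (ranges[i].end - size) & ~(align - 1);
                    if (cand >= ranges[i].start)
                            return cand;    /* highest available fit */
            }
            return -1UL;
    }

If find_memory_core_early() did something equivalent over the node's
active ranges, the caller would naturally get the highest usable block
back, and early_node_mem() wouldn't need to fiddle with 'start' at all.
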
Thanks,
J