Message-ID: <50D185CB.3020200@cn.fujitsu.com>
Date: Wed, 19 Dec 2012 17:15:55 +0800
From: Tang Chen <tangchen@...fujitsu.com>
To: Tang Chen <tangchen@...fujitsu.com>
CC: jiang.liu@...wei.com, wujianguo@...wei.com, hpa@...or.com,
akpm@...ux-foundation.org, wency@...fujitsu.com,
laijs@...fujitsu.com, linfeng@...fujitsu.com, yinghai@...nel.org,
isimatu.yasuaki@...fujitsu.com, rob@...dley.net,
kosaki.motohiro@...fujitsu.com, minchan.kim@...il.com,
mgorman@...e.de, rientjes@...gle.com, guz.fnst@...fujitsu.com,
rusty@...tcorp.com.au, lliubbo@...il.com, jaegeuk.hanse@...il.com,
tony.luck@...el.com, glommer@...allels.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v4 3/6] ACPI: Restructure movablecore_map with memory
info from SRAT.
On 12/19/2012 04:15 PM, Tang Chen wrote:
> The Hot Pluggable bit in SRAT flags specifies whether the memory range
> could be hotplugged.
>
> If the user specified movablecore_map=nn[KMG]@ss[KMG], reset
> movablecore_map.map to the intersection of the hotpluggable ranges
> from SRAT and the old movablecore_map.map.
> Else if the user specified movablecore_map=acpi, just use the
> hotpluggable ranges from SRAT.
> Otherwise, do nothing; the kernel will use all the memory in all
> nodes evenly.
>
> The idea "getting info from SRAT" was from Liu Jiang <jiang.liu@...wei.com>,
> and the idea "do more limit for memblock" was from Wu Jianguo <wujianguo@...wei.com>.
>
> Signed-off-by: Tang Chen <tangchen@...fujitsu.com>
> Tested-by: Gu Zheng <guz.fnst@...fujitsu.com>
> ---
> arch/x86/mm/srat.c | 38 +++++++++++++++++++++++++++++++++++---
> 1 files changed, 35 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/mm/srat.c b/arch/x86/mm/srat.c
> index 4ddf497..947a2b5 100644
> --- a/arch/x86/mm/srat.c
> +++ b/arch/x86/mm/srat.c
> @@ -146,7 +146,12 @@ int __init
> acpi_numa_memory_affinity_init(struct acpi_srat_mem_affinity *ma)
> {
> u64 start, end;
> + u32 hotpluggable;
> int node, pxm;
> +#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
> + int overlap;
> + unsigned long start_pfn, end_pfn;
> +#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
>
> if (srat_disabled())
> return -1;
> @@ -157,8 +162,10 @@ acpi_numa_memory_affinity_init(struct acpi_srat_mem_affinity *ma)
> if ((ma->flags & ACPI_SRAT_MEM_ENABLED) == 0)
> return -1;
>
> - if ((ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE) && !save_add_info())
> + hotpluggable = ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE;
> + if (hotpluggable && !save_add_info())
> return -1;
> +
> start = ma->base_address;
> end = start + ma->length;
> pxm = ma->proximity_domain;
> @@ -178,9 +185,34 @@ acpi_numa_memory_affinity_init(struct acpi_srat_mem_affinity *ma)
>
> node_set(node, numa_nodes_parsed);
>
> - printk(KERN_INFO "SRAT: Node %u PXM %u [mem %#010Lx-%#010Lx]\n",
> + printk(KERN_INFO "SRAT: Node %u PXM %u [mem %#010Lx-%#010Lx] %s\n",
> node, pxm,
> - (unsigned long long) start, (unsigned long long) end - 1);
> + (unsigned long long) start, (unsigned long long) end - 1,
> + hotpluggable ? "Hot Pluggable" : "");
> +
> +#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
> + start_pfn = PFN_DOWN(start);
> + end_pfn = PFN_UP(end);
> +
> + if (!hotpluggable) {
> + /* Clear the range overlapped in movablecore_map.map */
> + remove_movablecore_map(start_pfn, end_pfn);
> + goto out;
> + }
> +
> + /* If not using SRAT, don't modify user configuration. */
> + if (!movablecore_map.acpi)
> + goto out;
Here I forgot to add a check. Please see the newly resent version.
Thanks. :)
> +
> + /* If user chose to use SRAT info, insert the range anyway. */
> + if (insert_movablecore_map(start_pfn, end_pfn))
> + pr_err("movablecore_map: too many entries;"
> + " ignoring [mem %#010llx-%#010llx]\n",
> + (unsigned long long) start,
> + (unsigned long long) (end - 1));
> +
> +out:
> +#endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
> return 0;
> }
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/