Message-ID: <20130812143910.GH15892@htj.dyndns.org>
Date: Mon, 12 Aug 2013 10:39:10 -0400
From: Tejun Heo <tj@...nel.org>
To: Tang Chen <tangchen@...fujitsu.com>
Cc: robert.moore@...el.com, lv.zheng@...el.com, rjw@...k.pl,
lenb@...nel.org, tglx@...utronix.de, mingo@...e.hu, hpa@...or.com,
akpm@...ux-foundation.org, trenn@...e.de, yinghai@...nel.org,
jiang.liu@...wei.com, wency@...fujitsu.com, laijs@...fujitsu.com,
isimatu.yasuaki@...fujitsu.com, izumi.taku@...fujitsu.com,
mgorman@...e.de, minchan@...nel.org, mina86@...a86.com,
gong.chen@...ux.intel.com, vasilis.liaskovitis@...fitbricks.com,
lwoodman@...hat.com, riel@...hat.com, jweiner@...hat.com,
prarit@...hat.com, zhangyanfei@...fujitsu.com,
yanghy@...fujitsu.com, x86@...nel.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-acpi@...r.kernel.org
Subject: Re: [PATCH part5 1/7] x86: get pg_data_t's memory from other node

Hello,

The subject is a bit misleading. Maybe it should say "allow getting
..." rather than "get ..."?

On Thu, Aug 08, 2013 at 06:16:13PM +0800, Tang Chen wrote:
....
> A node could have several memory devices, and the device that holds the
> node data should be hot-removed last. But at the NUMA level we don't
> know which memory_block (/sys/devices/system/node/nodeX/memoryXXX) belongs
> to which memory device; we only have nodes, so we can only do node hotplug.
>
> But in virtualization, developers are now adding memory hotplug to qemu,
> which supports hotplug of a single memory device. So whole-node hotplug
> alone will not satisfy virtualization users.
>
> So in the end, we concluded that we'd better do memory hotplug and the
> local node things (node data, page tables, vmemmap, ...) in two steps.
> Please refer to https://lkml.org/lkml/2013/6/19/73

I suppose the above three paragraphs are trying to say

* A hotpluggable NUMA node may be composed of multiple memory devices
  which individually are hot-pluggable.

* pg_data_t and the page tables serving a NUMA node may be located in
  the same node they're serving; however, if the node is composed of
  multiple hotpluggable memory devices, the device containing them
  should be the last one to be removed.

* For physical memory hotplug, whole NUMA node hotunplugging is fine;
  however, in virtualized environments, finer-grained hotunplugging is
  desirable. Unfortunately, there currently is no way to tell inside
  which specific memory device pg_data_t and the page tables are
  allocated, making it impossible to order unpluggings of the memory
  devices of a NUMA node. To avoid the ordering problem while still
  allowing removal of a subset of a NUMA node, it has been decided
  that pg_data_t and the page tables should be allocated on a
  different, non-hotpluggable NUMA node (see the sketch below).
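
IOW, the arrangement being described seems to be something like the
following. This is a purely illustrative, userspace-only sketch, not
kernel code; struct node_desc, metadata_host_node() and the node
numbers are made-up names for this example only.

/*
 * Illustrative only: a hotpluggable node's metadata (pg_data_t, page
 * tables, vmemmap) gets hosted on some other, non-hotpluggable node,
 * so every memory device in the hotpluggable node can be unplugged
 * in any order.  None of these names are real kernel interfaces.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_NODES	4

struct node_desc {
	int	nid;
	bool	hotpluggable;	/* the whole node may be unplugged */
};

static struct node_desc nodes[MAX_NODES] = {
	{ .nid = 0, .hotpluggable = false },
	{ .nid = 1, .hotpluggable = true  },
};

/* Pick the node that should host @nid's metadata. */
static int metadata_host_node(int nid)
{
	if (!nodes[nid].hotpluggable)
		return nid;		/* local allocation is fine */

	for (int i = 0; i < MAX_NODES; i++)
		if (!nodes[i].hotpluggable)
			return i;	/* prefer a node that never goes away */

	return nid;			/* nothing better; fall back to local */
}

int main(void)
{
	printf("node 1 metadata hosted on node %d\n", metadata_host_node(1));
	return 0;
}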

Am I following it correctly? If so, can you please update the
description? It's quite confusing. Also, the decision seems rather
poorly made. It should be trivial to allocate memory for pg_data_t
and the page tables at one end of the NUMA node and just record the
boundary to distinguish between the area which can be removed at any
time and the part which can only be removed as a unit, as the last
step.
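
Roughly what I mean, again as a userspace-only sketch (struct
node_layout, block_removable() and the pfn numbers are all made up
for illustration, not actual kernel code):

/*
 * Illustrative only: keep pg_data_t / page tables at one end of the
 * node and record the boundary.  Blocks above the boundary can be
 * removed in any order; the block holding the metadata can only go
 * once it is the last thing left in the node.
 */
#include <stdbool.h>
#include <stdio.h>

struct node_layout {
	unsigned long start;	/* first pfn of the node                  */
	unsigned long end;	/* one past the last pfn                  */
	unsigned long meta_end;	/* metadata occupies [start, meta_end)    */
	unsigned long online;	/* pfns still online (simplified account) */
};

static bool block_removable(const struct node_layout *n,
			    unsigned long blk_start, unsigned long blk_end)
{
	if (blk_start >= n->meta_end)
		return true;				/* plain memory, any time */
	return n->online == blk_end - blk_start;	/* metadata block goes last */
}

int main(void)
{
	struct node_layout n = {
		.start = 0, .end = 4096, .meta_end = 256, .online = 4096,
	};

	printf("remove [1024,2048) now : %d\n", block_removable(&n, 1024, 2048));
	printf("remove [0,256)     now : %d\n", block_removable(&n, 0, 256));

	n.online = 256;		/* everything above the boundary is gone */
	printf("remove [0,256)     last: %d\n", block_removable(&n, 0, 256));
	return 0;
}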

Thanks.

--
tejun