Message-ID: <51886409.9030203@cn.fujitsu.com>
Date: Tue, 07 May 2013 10:16:41 +0800
From: Tang Chen <tangchen@...fujitsu.com>
To: Vasilis Liaskovitis <vasilis.liaskovitis@...fitbricks.com>
CC: mingo@...hat.com, hpa@...or.com, akpm@...ux-foundation.org,
yinghai@...nel.org, jiang.liu@...wei.com, wency@...fujitsu.com,
isimatu.yasuaki@...fujitsu.com, tj@...nel.org,
laijs@...fujitsu.com, davem@...emloft.net, mgorman@...e.de,
minchan@...nel.org, mina86@...a86.com, x86@...nel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v2 10/13] x86, acpi, numa, mem-hotplug: Introduce MEMBLK_HOTPLUGGABLE
	to mark and reserve hotpluggable memory.

Hi Vasilis,

On 05/06/2013 06:37 PM, Vasilis Liaskovitis wrote:
>
> you can use qemu-kvm and seabios from these branches:
> https://github.com/vliaskov/qemu-kvm/commits/memhp-v4
> https://github.com/vliaskov/seabios/commits/memhp-v4
>
> Instructions on how to use the DIMM/memory hotplug are here:
>
> http://lists.gnu.org/archive/html/qemu-devel/2012-12/msg02693.html
> (these patchsets are not in mainline qemu/qemu-kvm and seabios)
>
> e.g. the following creates a VM with 2G initial memory on 2 nodes (1GB on each).
> There is also an extra 1GB DIMM on each node (the last 3 lines below describe
> this):
>
> /opt/qemu/bin/qemu-system-x86_64 -bios /opt/devel/seabios-upstream/out/bios.bin \
> -enable-kvm -M pc -smp 4,maxcpus=8 -cpu host -m 2G \
> -drive file=/opt/images/debian.img,if=none,id=drive-virtio-disk0,format=raw,cache=none \
> -device virtio-blk-pci,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
> -netdev type=tap,id=guest0,vhost=on -device virtio-net-pci,netdev=guest0 \
> -vga std -monitor stdio \
> -numa node,mem=1G,cpus=2,nodeid=0 -numa node,mem=0,cpus=2,nodeid=1 \
> -device dimm,id=dimm0,size=1G,node=0,bus=membus.0,populated=off \
> -device dimm,id=dimm1,size=1G,node=1,bus=membus.0,populated=off
>
> After startup I hotplug dimm0 on node0 (or dimm1 on node1, same result):
> (qemu) device_add dimm,id=dimm0,size=1G,node=0,bus=membus.0
>
> then I reboot the VM. The kernel works without "movablecore=acpi" but panics
> with this option.
>
> Note this qemu/seabios does not model initial memory (-m 2G) as memory devices.
> Only extra dimms ("-device dimm") are modeled as separate memory devices.
>
OK, I'll try it. Thank you for telling me this. :)
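
(For my own notes while testing: as I understand it, "movablecore=acpi" is meant
to keep the ranges that SRAT marks as hotpluggable out of kernel allocations so
they stay removable. Below is a tiny userspace sketch of that idea, not the
patch code; all addresses and names are made up for illustration.)

#include <stdbool.h>
#include <stdio.h>

/* One SRAT-style memory affinity entry (illustrative only, not the ACPI struct). */
struct mem_affinity {
        unsigned long long start;
        unsigned long long end;
        int nid;
        bool hotpluggable;      /* the SRAT "hot pluggable" flag */
};

int main(void)
{
        /* Boot RAM on node0 plus two hotpluggable 1G DIMM ranges (made up). */
        struct mem_affinity srat[] = {
                { 0x000000000ULL, 0x040000000ULL, 0, false },
                { 0x100000000ULL, 0x140000000ULL, 0, true  },
                { 0x140000000ULL, 0x180000000ULL, 1, true  },
        };
        unsigned int i;

        for (i = 0; i < sizeof(srat) / sizeof(srat[0]); i++) {
                /*
                 * With "movablecore=acpi" the hotpluggable ranges would be
                 * kept free of kernel data (pagetables, slab, ...) so they
                 * can still be offlined and removed later.
                 */
                printf("%#llx-%#llx node%d: %s\n",
                       srat[i].start, srat[i].end, srat[i].nid,
                       srat[i].hotpluggable ? "movable only" : "kernel usable");
        }
        return 0;
}

It only prints which ranges would end up movable-only, but that is the
behaviour I will check against your setup.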
>>
>> Now in the kernel, we can recognize a node (by PXM in SRAT), but we cannot
>> recognize a memory device. Are you saying that if we have this
>> entry-granularity, we can hotplug a single memory device in a node?
>> (Perhaps there is more than one memory device in a node.)
>
> Yes, this is what I mean. Multiple memory devices on one node are possible in
> both a real machine and a VM.
> In the VM case, seabios can present different DIMM devices for any number of
> nodes. Each DIMM is also given a separate SRAT entry by seabios. So when the
> kernel initially parses the entries, it sees multiple ones for the same node.
> (these are merged together in numa_cleanup_meminfo though)
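
(Just to check that I follow the merge you mention: something like the small
userspace sketch below, which merges adjacent same-node entries and assumes
they are already sorted by address. This is my simplification, not the actual
numa_cleanup_meminfo code.)

#include <stdio.h>

/* Simplified memblk: one parsed SRAT memory entry (illustrative only). */
struct memblk {
        unsigned long long start;
        unsigned long long end;
        int nid;
};

/* Merge adjacent entries that belong to the same node; returns the new count. */
static int merge_meminfo(struct memblk *blk, int nr)
{
        int i, out = 0;

        for (i = 1; i < nr; i++) {
                if (blk[i].nid == blk[out].nid &&
                    blk[i].start == blk[out].end)
                        blk[out].end = blk[i].end;      /* extend previous */
                else
                        blk[++out] = blk[i];            /* keep as new entry */
        }
        return out + 1;
}

int main(void)
{
        /* Two DIMM entries for node0 followed by one for node1 (made up). */
        struct memblk blk[] = {
                { 0x100000000ULL, 0x120000000ULL, 0 },
                { 0x120000000ULL, 0x140000000ULL, 0 },
                { 0x140000000ULL, 0x160000000ULL, 1 },
        };
        int i, nr = merge_meminfo(blk, 3);

        for (i = 0; i < nr; i++)
                printf("node%d: %#llx-%#llx\n",
                       blk[i].nid, blk[i].start, blk[i].end);
        return 0;
}

If that is right, the per-DIMM boundaries are gone after the merge, which is
why the per-device ranges tracked elsewhere matter.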
>
>>
>> If so, it makes sense. But I don't think the kernel is able to recognize which
>> device a memory range belongs to now. And I'm not sure if we can do this.
>
> The kernel knows which memory ranges belong to each DIMM (with ACPI enabled, each
> DIMM is represented by an ACPI memory device; see drivers/acpi/acpi_memhotplug.c).
>
Oh, I'll check acpi_memhotplug.c and see what we can do.
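
(If I read acpi_memhotplug.c correctly, each memory device carries its own
start/length ranges, so mapping an address back to a device could look roughly
like the userspace sketch below. The struct, names, and addresses are my own
simplification for illustration, not the kernel code.)

#include <stdio.h>

/*
 * Rough picture only: each ACPI memory device (one per DIMM here) covers
 * one or more (start, length) ranges.
 */
struct mem_device_range {
        int device;                     /* which DIMM / memory device */
        unsigned long long start;
        unsigned long long length;
};

/* Find which memory device a physical address falls into; -1 if none. */
static int addr_to_device(const struct mem_device_range *r, int nr,
                          unsigned long long addr)
{
        int i;

        for (i = 0; i < nr; i++)
                if (addr >= r[i].start && addr < r[i].start + r[i].length)
                        return r[i].device;
        return -1;
}

int main(void)
{
        /* Two DIMMs on the same node, addresses made up for the example. */
        struct mem_device_range ranges[] = {
                { 0, 0x100000000ULL, 0x40000000ULL },   /* dimm0, 1G */
                { 1, 0x140000000ULL, 0x40000000ULL },   /* dimm1, 1G */
        };

        printf("0x150000000 belongs to dimm%d\n",
               addr_to_device(ranges, 2, 0x150000000ULL));
        return 0;
}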
And BTW, as Yinghai suggested, we'd better put the pagetable in the local node.
But the best way is to put the pagetable in the local memory device, I think.
Otherwise, we are not able to hot-remove a memory device.
Thanks. :)