Message-ID: <50584159.3020403@cn.fujitsu.com>
Date: Tue, 18 Sep 2012 17:39:37 +0800
From: Wen Congyang <wency@...fujitsu.com>
To: Vasilis Liaskovitis <vasilis.liaskovitis@...fitbricks.com>
CC: Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>, x86@...nel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, linux-acpi@...r.kernel.org,
linux-s390@...r.kernel.org, linux-sh@...r.kernel.org,
linux-ia64@...r.kernel.org, cmetcalf@...era.com,
sparclinux@...r.kernel.org, rientjes@...gle.com, liuj97@...il.com,
len.brown@...el.com, benh@...nel.crashing.org, paulus@...ba.org,
cl@...ux.com, minchan.kim@...il.com, kosaki.motohiro@...fujitsu.com
Subject: Re: [RFC v8 PATCH 00/20] memory-hotplug: hot-remove physical memory
At 09/13/2012 01:18 AM, Vasilis Liaskovitis Wrote:
> Hi,
>
> On Wed, Sep 12, 2012 at 01:20:28PM +0800, Wen Congyang wrote:
>>>
>>> On Mon, Sep 10, 2012 at 10:01:44AM +0800, Wen Congyang wrote:
>>>> At 09/10/2012 09:46 AM, Yasuaki Ishimatsu Wrote:
>>>>> How do you test the patch? As Andrew says, for hot-removing memory,
>>>>> we need a particular hardware. I think so too. So many people may want
>>>>> to know how to test the patch.
>>>>> If we apply following patch to kvm guest, can we hot-remove memory on
>>>>> kvm guest?
>>>>>
>>>>> http://lists.gnu.org/archive/html/qemu-devel/2012-07/msg01389.html
>>>>
>>>> Yes, if we apply this patchset, we can test hot-remove memory on kvm guest.
>>>> But that patchset doesn't implement _PS3, so there is some restriction.
>>>
>>> the following repos contain the patchset above, plus 2 more patches that add
>>> PS3 support to the dimm devices in qemu/seabios:
>>>
>>> https://github.com/vliaskov/seabios/commits/memhp-v2
>>> https://github.com/vliaskov/qemu-kvm/commits/memhp-v2
>>>
>>> I have not posted the PS3 patches yet in the qemu list, but will post them
>>> soon for v3 of the memory hotplug series. If you have issues testing, let me
>>> know.
>>
>> Hmm, seabios doesn't support the ACPI SLIT table. We can specify a node id for a
>> dimm device, so I think we should support SLIT in seabios too. Otherwise we may
>> see the following kernel messages:
>> [ 325.016769] init_memory_mapping: [mem 0x40000000-0x5fffffff]
>> [ 325.018060] [mem 0x40000000-0x5fffffff] page 2M
>> [ 325.019168] [ffffea0001000000-ffffea00011fffff] potential offnode page_structs
>> [ 325.024172] [ffffea0001200000-ffffea00013fffff] potential offnode page_structs
>> [ 325.028596] [ffffea0001400000-ffffea00017fffff] PMD -> [ffff880035000000-ffff8800353fffff] on node 1
>> [ 325.031775] [ffffea0001600000-ffffea00017fffff] potential offnode page_structs
>>
>> Do you have plan to do it?
> thanks for testing.
>
> commit 5294828 from https://github.com/vliaskov/seabios/commits/memhp-v2
> implements a SLIT table for the given numa nodes.
Hmm, why do you set node_distance(i, j) to REMOTE_DISTANCE if i != j?
>
> However I am not sure the SLIT is the problem. The kernel builds a default
> numa_distance table in arch/x86/mm/numa.c: numa_alloc_distance(). If the BIOS
> doesn't present a SLIT, this should take effect (numactl --hardware should
> report this table)
If the BIOS doesn't present a SLIT, numa_distance_cnt is set to 0 in
numa_reset_distance(), so node_distance(i, j) returns REMOTE_DISTANCE for i != j.
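To make the fallback concrete, here is a minimal sketch (not kernel code, just a
user-space illustration) of the degenerate distance table the kernel ends up with
when no SLIT is present: LOCAL_DISTANCE (10) on the diagonal and REMOTE_DISTANCE
(20) everywhere else. The node count is an assumption you can override.

```shell
#!/bin/sh
# Sketch of the default NUMA distance table used when the BIOS provides no
# SLIT: node_distance(i, j) is LOCAL_DISTANCE for i == j and REMOTE_DISTANCE
# otherwise. Values match include/linux/topology.h.
LOCAL_DISTANCE=10
REMOTE_DISTANCE=20

# Print the N x N default distance matrix, one "nodeI: ..." row per node.
default_slit() {
    nodes=$1
    i=0
    while [ "$i" -lt "$nodes" ]; do
        j=0
        row=""
        while [ "$j" -lt "$nodes" ]; do
            if [ "$i" -eq "$j" ]; then
                row="$row $LOCAL_DISTANCE"
            else
                row="$row $REMOTE_DISTANCE"
            fi
            j=$((j + 1))
        done
        echo "node$i:$row"
        i=$((i + 1))
    done
}

default_slit 2
```

On a running guest this can be compared against what the kernel actually exposes
in /sys/devices/system/node/node*/distance (or `numactl --hardware`) to check
whether a SLIT from seabios took effect.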
>
> Do you have more details on how to reproduce the warning? e.g. how many dimms
> are present in the system? Does this happen on the first dimm hot-plugged?
> Are all SRAT entries parsed correctly at boot-time or do you see any other
> warnings at boot-time?
I can't reproduce it now. IIRC, I only did the following: hotplugged a memory
device, onlined its pages, offlined the pages, and hot-removed the memory
device.
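For reference, the online/offline part of those steps goes through the memory
sysfs interface. The sketch below is a dry-run by default and only prints the
commands; the block name "memory32" is a hypothetical example, and the actual
hotplug/eject of the DIMM happens on the qemu/firmware side, not here.

```shell
#!/bin/sh
# Sketch of the reproduction steps: online a hotplugged memory block, then
# offline it again before hot-removing the device. DRY_RUN=1 (the default)
# only echoes the commands; set DRY_RUN=0 and run as root on a guest that
# actually supports memory hot-remove.
DRY_RUN=${DRY_RUN:-1}
MEM=memory32   # hypothetical block; pick a real one from /sys/devices/system/memory

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        eval "$*"
    fi
}

# 1. after hotplugging the DIMM (e.g. from the qemu monitor), online its pages
run "echo online > /sys/devices/system/memory/$MEM/state"
# 2. offline the pages again
run "echo offline > /sys/devices/system/memory/$MEM/state"
# 3. hot-remove the DIMM itself via ACPI eject on the qemu/firmware side
```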
Thanks
Wen Congyang
>
> I 'll investigate a bit more and report back.
>
> thanks,
>
> - Vasilis
>