Message-Id: <75acdad4-f0f4-f9c6-8a5c-3df44d4882cf@linux.vnet.ibm.com>
Date: Mon, 1 Oct 2018 16:23:22 -0700
From: Tyrel Datwyler <tyreld@...ux.vnet.ibm.com>
To: Michal Hocko <mhocko@...nel.org>,
Michael Bringmann <mwb@...ux.vnet.ibm.com>
Cc: Thomas Falcon <tlfalcon@...ux.vnet.ibm.com>,
Kees Cook <keescook@...omium.org>,
Mathieu Malaterre <malat@...ian.org>,
linux-kernel@...r.kernel.org, Nicholas Piggin <npiggin@...il.com>,
Pavel Tatashin <pasha.tatashin@...cle.com>, linux-mm@...ck.org,
Mauricio Faria de Oliveira <mauricfo@...ux.vnet.ibm.com>,
Juliet Kim <minkim@...ibm.com>,
Thiago Jung Bauermann <bauerman@...ux.vnet.ibm.com>,
Nathan Fontenot <nfont@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
YASUAKI ISHIMATSU <yasu.isimatu@...il.com>,
linuxppc-dev@...ts.ozlabs.org,
Dan Williams <dan.j.williams@...el.com>,
Oscar Salvador <osalvador@...e.de>
Subject: Re: [PATCH] migration/mm: Add WARN_ON to try_offline_node
On 10/01/2018 01:27 PM, Michal Hocko wrote:
> On Mon 01-10-18 13:56:25, Michael Bringmann wrote:
>> In some LPAR migration scenarios, device-tree modifications are
>> made to the affinity of the memory in the system. For instance,
>> it may occur that memory is installed to nodes 0,3 on a source
>> system, and to nodes 0,2 on a target system. Node 2 may not
>> have been initialized/allocated on the target system.
>>
>> After migration, if a RTAS PRRN memory remove is made to a
>> memory block that was in node 3 on the source system, then
>> try_offline_node tries to remove it from node 2 on the target.
>> The NODE_DATA(2) block would not be initialized on the target,
>> and there is no validation check in the current code to prevent
>> the use of a NULL pointer.
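(For reference, a minimal sketch of the kind of guard the subject line implies, assuming it sits at the top of try_offline_node() in mm/memory_hotplug.c; the exact placement and wording here are assumptions for illustration, not the posted patch itself.)

	void try_offline_node(int nid)
	{
		pg_data_t *pgdat = NODE_DATA(nid);

		/*
		 * On the migration target the node may never have been
		 * initialized, so NODE_DATA(nid) can be NULL here; warn
		 * and bail out rather than dereferencing it below.
		 */
		if (WARN_ON(!pgdat))
			return;

		/* ... existing walk of the node's memory sections ... */
	}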
>
> I am not familiar with ppc and the above doesn't really help me
> much. Sorry about that. But from the above it is not clear to me whether
> it is the caller that does something unexpected or the hotplug code
> not being robust enough. From your changelog I would suggest the latter,
> but why don't we see the same problem for other archs? Is this a problem
> of unrolling a partial failure?
>
> dlpar_remove_lmb does the following
>
> nid = memory_add_physaddr_to_nid(lmb->base_addr);
>
> remove_memory(nid, lmb->base_addr, block_sz);
>
> /* Update memory regions for memory remove */
> memblock_remove(lmb->base_addr, block_sz);
>
> dlpar_remove_device_tree_lmb(lmb);
>
> Is the whole operation correct when remove_memory simply backs off
> silently? Why don't we have to care about the memblock and
> dlpar_remove_device_tree_lmb parts? In other words, how come the physical
> memory range is valid while the node association is not?
>
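(Purely for illustration of the alternative Michal is probing at: the caller could validate the node before handing it to remove_memory() instead of relying on the hotplug core. The sketch below assumes the dlpar_remove_lmb() flow quoted above; the node_online() check and the first_online_node fallback are assumptions for the example, not what the posted patch does.)

	nid = memory_add_physaddr_to_nid(lmb->base_addr);

	/*
	 * Illustrative only: if the derived node was never initialized
	 * on the target system, fall back to an online node instead of
	 * passing an invalid nid down to the hotplug core.
	 */
	if (!node_online(nid))
		nid = first_online_node;

	remove_memory(nid, lmb->base_addr, block_sz);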
I guess, with respect to my previous reply, that patch should be considered in conjunction with this patch set as well?
https://lore.kernel.org/linuxppc-dev/20181001125846.2676.89826.stgit@ltcalpine2-lp9.aus.stglabs.ibm.com/T/#t
-Tyrel