Message-ID: <e9f3013a-bee2-159b-02ca-fc9546d525f2@redhat.com>
Date: Fri, 29 Mar 2019 09:51:29 +0100
From: David Hildenbrand <david@...hat.com>
To: Oscar Salvador <osalvador@...e.de>
Cc: akpm@...ux-foundation.org, mhocko@...e.com,
dan.j.williams@...el.com, Jonathan.Cameron@...wei.com,
anshuman.khandual@....com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH 0/4] mm,memory_hotplug: allocate memmap from hotadded memory
> Great, I would like to see how this works there :-).
>
>> I guess one important thing to mention is that it is no longer possible
>> to remove memory in a different granularity it was added. I slightly
>> remember that ACPI code sometimes "reuses" parts of already added
>> memory. We would have to validate that this can indeed not be an issue.
>>
>> drivers/acpi/acpi_memhotplug.c:
>>
>> result = __add_memory(node, info->start_addr, info->length);
>> if (result && result != -EEXIST)
>> continue;
>>
>> What would happen when removing this DIMM (->remove_memory())?
>
> Yeah, I see the point.
> Well, we are safe here because the vmemmap data is being allocated in
> every call to __add_memory/add_memory/add_memory_resource.
>
> E.g:
>
> * Assuming a memblock granularity of 128M:
>
> # object_add memory-backend-ram,id=ram0,size=256M
> # device_add pc-dimm,id=dimm0,memdev=ram0,node=1
So, this should result in one __add_memory() call with 256MB, creating
two memory block devices (128MB each). I *assume* (haven't looked at the
details yet, sorry) that you will allocate the vmemmap for (and on!) each
of these two 128MB sections/memblocks, correct?
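Just to put rough numbers on it (back-of-the-envelope, assuming 4KB base
pages and a 64 byte struct page):

	128MB memory block = 128MB / 4KB      = 32768 struct pages
	vmemmap per block  = 32768 * 64 bytes = 2MB

So each 128MB block would use roughly 2MB (~1.6%) of itself for its own
vmemmap, if I got the idea right.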
>
> I am not sure how ACPI ends up splitting the DIMM into memory resources
> (aka mem_device->res_list), but it does not really matter.
> For each mem_device->res_list item, we will make a call to __add_memory,
> which will allocate the vmemmap data for __that__ item; we do not care
> about the others.
>
> And when removing the DIMM, acpi_memory_remove_memory will make a call to
> __remove_memory() for each mem_device->res_list item, and that will take
> care of freeing up the vmemmap data.
Ah okay, that makes sense.
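For reference, the shape of that code is roughly the following (heavily
simplified from drivers/acpi/acpi_memhotplug.c, details differ between
kernel versions):

	/* acpi_memory_enable_device(): add each resource separately */
	list_for_each_entry(info, &mem_device->res_list, list) {
		result = __add_memory(node, info->start_addr, info->length);
		if (result && result != -EEXIST)
			continue;
		info->enabled = 1;
	}

	/* acpi_memory_remove_memory(): remove each resource separately */
	list_for_each_entry_safe(info, n, &mem_device->res_list, list) {
		if (!info->enabled)
			continue;
		__remove_memory(nid, info->start_addr, info->length);
		list_del(&info->list);
		kfree(info);
	}

So as long as the vmemmap is allocated and freed per res_list entry, the
add and remove paths stay symmetric.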
>
> Now, in all my tests, ACPI always considered a DIMM a single memory resource,
> but maybe under different circumstances it ends up splitting it into different
> mem resources.
> But it does not really matter, as vmemmap data is created and isolated in
> every call to __add_memory.
Yes, as long as the calls to add_memory() match the calls to
remove_memory(), we are totally fine. I am wondering whether that might
not always be the case. A simplified example:
A DIMM overlaps with some other system RAM that was detected and added
during boot. When detecting the DIMM, __add_memory() returns -EEXIST.
Now, when unplugging the DIMM, we call remove_memory(), but only remove
the DIMM part. I wonder how/if something like that can happen and how
the system would react.
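Something like this (made-up addresses, just to illustrate the sequence
I am worried about):

	/* the DIMM range is already covered by memory added during boot */

	/* ACPI detects the DIMM and tries to add it */
	__add_memory(nid, 0x110000000, SZ_256M);    /* -> -EEXIST */

	/* -EEXIST does not make ACPI skip the resource (see the snippet
	 * above), so on unplug we would still end up calling:
	 */
	__remove_memory(nid, 0x110000000, SZ_256M);

That is the kind of mismatch I mean: removing in a granularity (or range)
that does not match how the memory was originally added.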
I guess I'll have to do some more ACPI code reading to find out how this
-EEXIST case can come to life.
>
>> Also have a look at
>>
>> arch/powerpc/platforms/powernv/memtrace.c
>>
>> I consider it evil code. It will simply try to offline+unplug *some*
>> memory it finds in *some granularity*. Not sure if this might be
>> problematic.
>
> Heh, memtrace from powerpc ^^. I saw some oddities coming from there, but
> not with my code, because I did not get to test that concretely.
> But I am interested to see if it can trigger something, so I will be testing
> that in the next days.
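(To give an idea of the pattern in memtrace: roughly the following, a
simplified sketch from memory rather than the actual code, and
try_offline_block() is a made-up name for the offlining it does per
memory block:

	/* try to offline+remove one memory block somewhere in the node,
	 * bytes = memory_block_size_bytes()
	 */
	for (base = end - bytes; base >= start; base -= bytes) {
		if (!try_offline_block(nid, base))  /* hypothetical helper */
			continue;
		remove_memory(nid, base, bytes);
		break;
	}

So it offlines and removes whichever memory blocks it happens to succeed
on, independently of how that memory was originally added.)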
>
>> Would there be any "safety net" for adding/removing memory in different
>> granularities?
>
> Uhm, I do not think we need it, or at least I cannot think of a case where this
> could cause trouble with the current design.
> Can you think of any?
Nope, as long as it works (especially no change to what we had before),
no safety net needed :)
I was just curious whether add_memory() followed by remove_memory() used
to work before and whether your patches might change that behavior.
Thanks! Will try to look into the details soon!
>
> Thanks David ;-)
>
--
Thanks,
David / dhildenb