Message-ID: <d55aa259-56c0-9601-ffce-997ea1fb3ac5@redhat.com>
Date: Wed, 3 Apr 2019 10:17:26 +0200
From: David Hildenbrand <david@...hat.com>
To: Michal Hocko <mhocko@...nel.org>,
Oscar Salvador <osalvador@...e.de>
Cc: akpm@...ux-foundation.org, dan.j.williams@...el.com,
Jonathan.Cameron@...wei.com, anshuman.khandual@....com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 0/4] mm,memory_hotplug: allocate memmap from hotadded memory

On 03.04.19 10:12, Michal Hocko wrote:
> On Wed 03-04-19 10:01:16, Oscar Salvador wrote:
>> On Tue, Apr 02, 2019 at 02:48:45PM +0200, Michal Hocko wrote:
>>> So what is going to happen when you hotadd two memblocks. The first one
>>> holds memmaps and then you want to hotremove (not just offline) it?
>>
>> If you hot-add two memblocks, this means that either:
>>
>> a) you hot-add one 256MB memory device (128MB per memblock), or
>> b) you hot-add two 128MB memory devices
>>
>> Either way, hot-removing only works on a memory device as a whole, so
>> there is no problem.
>>
>> Vmemmaps are created per hot-add operation, which means that a vmemmap
>> will be created for each hot-added range.
>> And since hot-add and hot-remove operations work at the same
>> granularity, there is no problem.
>
> What prevents somebody from calling arch_add_memory directly from a
> driver for a range spanning multiple memblocks? In other words, aren't
> you making assumptions about future usage based on the QEMU use case?

To drivers, we only expose add_memory() and friends. And I think this is
a good idea.

As I noted, we only have an issue if add_memory() and remove_memory()
are called with different granularity. I gave two examples where this
might not be the case, but we will have to look into the details.
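The invariant under discussion — that hot-remove must cover exactly the
range of a prior hot-add, so the memblock holding the vmemmap disappears
together with the memory it describes — can be sketched as a small
userspace toy model. The names hot_add_device()/hot_remove_device() are
illustrative only; they are not the kernel's add_memory()/remove_memory(),
and the bookkeeping here is a deliberate simplification:

```c
/* Toy model of the granularity invariant: a hot-remove is only accepted
 * for a range that exactly matches an earlier hot-add. Illustrative
 * sketch, not kernel code. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MEMBLOCK_SIZE (128ULL << 20)	/* 128MB per memblock */
#define MAX_DEVICES 8

struct mem_device {
	unsigned long long start;
	unsigned long long size;	/* multiple of MEMBLOCK_SIZE */
	bool present;
};

static struct mem_device devices[MAX_DEVICES];

/* Record a hot-added memory device; in the patchset under discussion,
 * the vmemmap would be carved out of the beginning of this range. */
static int hot_add_device(unsigned long long start, unsigned long long size)
{
	size_t i;

	if (!size || size % MEMBLOCK_SIZE)
		return -1;	/* must be whole memblocks */
	for (i = 0; i < MAX_DEVICES; i++) {
		if (!devices[i].present) {
			devices[i].start = start;
			devices[i].size = size;
			devices[i].present = true;
			return 0;
		}
	}
	return -1;
}

/* Hot-remove succeeds only for a (start, size) pair matching a prior
 * hot-add, so the memblock carrying the vmemmap cannot be torn down
 * while sibling memblocks of the same device still use it. */
static int hot_remove_device(unsigned long long start, unsigned long long size)
{
	size_t i;

	for (i = 0; i < MAX_DEVICES; i++) {
		if (devices[i].present && devices[i].start == start &&
		    devices[i].size == size) {
			devices[i].present = false;
			return 0;
		}
	}
	return -1;	/* partial or misaligned remove is rejected */
}
```

Under this model, removing only one of the two memblocks of a 256MB
device fails, which is exactly why the "first memblock holds the
memmaps" case above is not a problem as long as add and remove use the
same granularity.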
--
Thanks,
David / dhildenb