Message-ID: <1efcb368-fcdf-4bdd-8b94-a705b7806bc2@redhat.com>
Date: Wed, 8 Oct 2025 10:02:26 +0200
From: David Hildenbrand <david@...hat.com>
To: Sumanth Korikkar <sumanthk@...ux.ibm.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>, linux-s390
<linux-s390@...r.kernel.org>, Gerald Schaefer
<gerald.schaefer@...ux.ibm.com>, Heiko Carstens <hca@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>, Alexander Gordeev <agordeev@...ux.ibm.com>
Subject: Re: [PATCH 0/4] Support dynamic (de)configuration of memory
On 08.10.25 08:05, Sumanth Korikkar wrote:
>> Care to share an example output? I only have an s390x VM with 2 CPUs and no
>> way to configure/deconfigure.
>
> lscpu -e
> CPU NODE DRAWER BOOK SOCKET CORE L1d:L1i:L2 ONLINE CONFIGURED POLARIZATION ADDRESS
> 0 0 0 0 0 0 0:0:0 yes yes vert-medium 0
> 1 0 0 0 0 0 1:1:1 yes yes vert-medium 1
> 2 0 0 0 0 1 2:2:2 yes yes vert-low 2
> 3 0 0 0 0 1 3:3:3 yes yes vert-low 3
>
> # chcpu -d 2-3
> CPU 2 disabled
> CPU 3 disabled
> # chcpu -g 2
> CPU 2 deconfigured
> # chcpu -c 2
> CPU 2 configured
> # chcpu -e 2-3
> CPU 2 enabled
> CPU 3 enabled
Makes sense, thanks!
>
>>> chmem changes would look like:
>>> chmem -c 128M -m 1 : configure memory with memmap-on-memory enabled
>>> chmem -g 128M : deconfigure memory
>>
>> I wonder if the above two are really required. I would expect most/all users
>> to simply keep using -e / -d.
>>
>> Sure, there might be some corner cases, but I would assume most people
>> will not want to care about memmap-on-memory with the new model.
>
> I believe this remains very beneficial for customers in the following
> scenario:
>
> 1) Initial memory layout:
> 4 GB configured online
> 512 GB standby
>
> If memory_hotplug.memmap_on_memory=Y is set in the kernel command line:
> Suppose the user requires more memory and onlines 256 GB. With memmap-on-memory
> enabled, this likely succeeds by default.
>
> Later, the user needs 256 GB of contiguous physical memory across memory
> blocks. Then, the user can still configure those memory blocks with
> memmap-on-memory disabled and online them.
>
> 2) If the administrator forgets to configure
> memory_hotplug.memmap_on_memory=Y, the following steps can be taken:
> Rescue from OOM situations: configure with memmap-on-memory enabled, online it.
That's my point: I don't consider either very likely to be used by
actual admins.
I guess (1) really only is a problem with very big memory blocks.
Assuming memory blocks are just 128 MiB (or even 1 GiB), you can
add+online them individually. Once you succeed with the first one
(very likely), the others will follow.
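For example (just a sketch, not from the patch set; memory100 and the 128m
size are placeholders, assuming the usual sysfs memory block interface and
util-linux chmem), onlining standby memory block by block could look like:

  # echo online > /sys/devices/system/memory/memory100/state
  # chmem -e 128m

i.e., either online a single standby block via its sysfs state file, or let
chmem enable one block-sized range at a time.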
Sure, if you are so low on memory that you cannot even online a single
memory block, then memmap-on-memory makes sense.
But note that memmap-on-memory was added to handle hotplug of large
chunks of memory (large DIMM/NVDIMM, large CXL device) in one go,
without the chance to add+online individual memory blocks incrementally.
That's also the reason why I haven't bothered so far to implement
memmap-on-memory support for virtio-mem: as we add+online individual
(small) memory blocks, the implementation effort for supporting
memmap_on_memory was not warranted.
(it's a bit trickier for virtio-mem to implement :) )
--
Cheers
David / dhildenb