Message-ID: <aC3iDR8mB0uFWzAT@li-2b55cdcc-350b-11b2-a85c-a78bff51fc11.ibm.com>
Date: Wed, 21 May 2025 16:24:13 +0200
From: Sumanth Korikkar <sumanthk@...ux.ibm.com>
To: David Hildenbrand <david@...hat.com>
Cc: linux-mm <linux-mm@...ck.org>, Andrew Morton <akpm@...ux-foundation.org>,
Oscar Salvador <osalvador@...e.de>,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>, Vasily Gorbik <gor@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
linux-s390 <linux-s390@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 1/4] mm/memory_hotplug: Add interface for runtime
(de)configuration of memory
> So, the same as /sys/devices/system/memory/block_size_bytes ?
>
> In a future where we could have variable sized memory blocks, what would be
> the granularity here?
I wasn't aware of variable-sized memory blocks. Should we then either
introduce a block_size_bytes attribute inside each memoryX/ directory, or
add it only once variable-sized memory block support is actually implemented?
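For illustration, a per-block attribute could look roughly like this (a
minimal sketch against drivers/base/memory.c; with today's fixed-size blocks
it would simply mirror the global value):

static ssize_t block_size_bytes_show(struct device *dev,
				     struct device_attribute *attr, char *buf)
{
	/* with fixed-size blocks this is the same as the global attribute */
	return sysfs_emit(buf, "%lx\n", memory_block_size_bytes());
}
static DEVICE_ATTR_RO(block_size_bytes);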
> I assume, because that is assumed to be the smallest granularity in which we
> can add_memory().
>
> And the memory block size is currently always at least the storage increment
> size, correct?
>
> >
> > As I understand it, add_memory() operates on memory block granularity,
> > and this is enforced by check_hotplug_memory_range(), which ensures the
> > requested range aligns with the memory block size.
>
> Yes. I was rather wondering, if we could have storage increment size >
> memory block size.
I tried the following:
* Config1 (z/VM, 8GB online + 4GB standby)
vmcp q v store
STORAGE = 8320M MAX = 2T INC = 16M STANDBY = 3968M RESERVED = 0
The increment size is 16MB in this case and the memory block size is 128MB.
* Config2 (z/VM, 512M online + 512M standby)
vmcp q v storage
STORAGE = 512M MAX = 2T INC = 1M STANDBY = 512M RESERVED = 0
However, memory_block_size_bytes() returns max(increment_size,
MIN_MEMORY_BLOCK_SIZE), so in both cases the memory block size ends up
being 128MB.
On the other hand, on one of the LPARs I checked, the increment size is 2GB,
which is greater than MIN_MEMORY_BLOCK_SIZE. Hence, the memory block size is
2GB there.
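For reference, the relation is roughly this (paraphrased sketch of the s390
memory_block_size_bytes() override; sclp.rzm is the storage increment size):

unsigned long memory_block_size_bytes(void)
{
	/*
	 * The memory block size is at least the section size and at
	 * least one storage increment, so add_memory() granularity is
	 * never smaller than the increment size.
	 */
	return max(MIN_MEMORY_BLOCK_SIZE, sclp.rzm);
}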
> > I was wondering about the following practical scenario:
> >
> > When online memory is nearly full, the user can add a standby memory
> > block with memmap_on_memory enabled. This allows the system to avoid
> > consuming already scarce online memory for metadata.
>
> Right, that's the use case I mentioned. But we're talking about ~ 2/4 MiB on
> s390x for a single memory block. There are other things we have to allocate
> memory for when onlining memory, so there is no guarantee that it would work
> with memmap_on_memory either.
>
> It makes it more likely to succeed :)
You're right, I wasn't precise.
> > After enabling and bringing that standby memory online, the user now
> > has enough free online memory to add additional memory blocks without
> > memmap_on_memory. These later blocks can provide physically contiguous
> > memory, which is important for workloads or devices requiring continuous
> > physical address space.
> >
> > If my interpretation is correct, I see good potential for this to be
> > useful.
>
> Again, I think only in the case where we don't have 2/4 MiB for the
> memmap.
I think it is not 2/4 MiB in every use case.
On my LPAR, the increment size is 2GB, which means 32MB of struct page
metadata per memory block (see the arithmetic below).
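The arithmetic, assuming 4KB pages and sizeof(struct page) == 64:

	2GB / 4KB   = 524288 pages per memory block
	524288 * 64 = 32MB of memmap (struct page) per memory block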
> > As you pointed out, how about having something similar to
> > 73954d379efd ("dax: add a sysfs knob to control memmap_on_memory behavior")
>
> Right. But here, the use case is usually (a) to add a gigantic amount of
> memory using add_memory(), not small blocks like on s390x (b) consume the
> memmap from (slow) special-purpose memory as well.
>
> Regarding (a), the memmap could be so big that add_memory() might never
> really work (not just because of some temporary low-memory situation).
Sorry, I didn't understand that correctly.
Regarding (a): if add_memory() is performed with memmap_on_memory, the altmap
metadata should fit into the added memory itself, right?
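I.e. something like the following (sketch only; nid and block_start stand for
the node and start address of the block being added, and the range is assumed
to be suitable for memmap_on_memory in the first place):

	/* the vmemmap for the new block is carved out of the block itself */
	rc = add_memory(nid, block_start, memory_block_size_bytes(),
			MHP_MEMMAP_ON_MEMORY);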
> > 1) To configure/deconfigure a memory block
> > /sys/firmware/memory/memoryX/config
> >
> > 1 -> configure
> > 0 -> deconfigure
> >
> > 2) Determine whether memory block should have memmap_on_memory or not.
> > /sys/firmware/memory/memoryX/memmap_on_memory
> > 1 -> with altmap
> > 0 -> without altmap
> >
> > This attribute must be set before the memoryX is configured. Or else, it
> > will default to CONFIG_MHP_MEMMAP_ON_MEMORY / memmap_on_memory parameter.
>
> I don't have anything against that option. Just a thought if we really have
> to introduce this right now.
If there are no objections to this design, I'm happy to start exploring
it further; a rough sketch of the config attribute is below. Thank you
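To make it a bit more concrete, the write side of the config attribute could
look roughly like this (sketch only; sclp_mem_configure() and
sclp_mem_deconfigure() are placeholder names for the actual SCLP
assign/unassign plus add_memory()/offline-and-remove logic):

static ssize_t config_store(struct kobject *kobj, struct kobj_attribute *attr,
			    const char *buf, size_t count)
{
	bool configure;
	int rc;

	rc = kstrtobool(buf, &configure);
	if (rc)
		return rc;
	/* placeholder helpers, see above */
	rc = configure ? sclp_mem_configure(kobj) : sclp_mem_deconfigure(kobj);
	return rc ? rc : count;
}
static struct kobj_attribute config_attr = __ATTR_WO(config);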