Message-ID: <831b9b12-08fe-f5dc-f21d-83284b0aee8a@redhat.com>
Date: Mon, 9 Oct 2023 17:04:05 +0200
From: David Hildenbrand <david@...hat.com>
To: "Huang, Ying" <ying.huang@...el.com>,
Vishal Verma <vishal.l.verma@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Oscar Salvador <osalvador@...e.de>,
Dan Williams <dan.j.williams@...el.com>,
Dave Jiang <dave.jiang@...el.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
nvdimm@...ts.linux.dev, linux-cxl@...r.kernel.org,
Dave Hansen <dave.hansen@...ux.intel.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Michal Hocko <mhocko@...e.com>,
Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Jeff Moyer <jmoyer@...hat.com>
Subject: Re: [PATCH v5 1/2] mm/memory_hotplug: split memmap_on_memory requests
across memblocks
On 07.10.23 10:55, Huang, Ying wrote:
> Vishal Verma <vishal.l.verma@...el.com> writes:
>
>> The MHP_MEMMAP_ON_MEMORY flag for hotplugged memory is restricted to
>> 'memblock_size' chunks of memory being added. Adding a larger span of
>> memory precludes memmap_on_memory semantics.
>>
>> For users of hotplug such as kmem, large amounts of memory might get
>> added from the CXL subsystem. In some cases, this amount may exceed the
>> available 'main memory' to store the memmap for the memory being added.
>> In this case, it is useful to have a way to place the memmap on the
>> memory being added, even if it means splitting the addition into
>> memblock-sized chunks.
>>
>> Change add_memory_resource() to loop over memblock-sized chunks of
>> memory if the caller requested memmap_on_memory, and if the other
>> conditions for it are met. Teach try_remove_memory() to also expect
>> that a memory range being removed might have been split up into
>> memblock-sized chunks, and to loop through those as needed.
>>
>> Cc: Andrew Morton <akpm@...ux-foundation.org>
>> Cc: David Hildenbrand <david@...hat.com>
>> Cc: Michal Hocko <mhocko@...e.com>
>> Cc: Oscar Salvador <osalvador@...e.de>
>> Cc: Dan Williams <dan.j.williams@...el.com>
>> Cc: Dave Jiang <dave.jiang@...el.com>
>> Cc: Dave Hansen <dave.hansen@...ux.intel.com>
>> Cc: Huang Ying <ying.huang@...el.com>
>> Suggested-by: David Hildenbrand <david@...hat.com>
>> Signed-off-by: Vishal Verma <vishal.l.verma@...el.com>
>> ---
>> mm/memory_hotplug.c | 162 ++++++++++++++++++++++++++++++++--------------------
>> 1 file changed, 99 insertions(+), 63 deletions(-)
>>
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index f8d3e7427e32..77ec6f15f943 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -1380,6 +1380,44 @@ static bool mhp_supports_memmap_on_memory(unsigned long size)
>>  	return arch_supports_memmap_on_memory(vmemmap_size);
>>  }
>>
>> +static int add_memory_create_devices(int nid, struct memory_group *group,
>> +				     u64 start, u64 size, mhp_t mhp_flags)
>> +{
>> +	struct mhp_params params = { .pgprot = pgprot_mhp(PAGE_KERNEL) };
>> +	struct vmem_altmap mhp_altmap = {
>> +		.base_pfn = PHYS_PFN(start),
>> +		.end_pfn = PHYS_PFN(start + size - 1),
>> +	};
>> +	int ret;
>> +
>> +	if ((mhp_flags & MHP_MEMMAP_ON_MEMORY)) {
>> +		mhp_altmap.free = memory_block_memmap_on_memory_pages();
>> +		params.altmap = kmalloc(sizeof(struct vmem_altmap), GFP_KERNEL);
>> +		if (!params.altmap)
>> +			return -ENOMEM;
>> +
>> +		memcpy(params.altmap, &mhp_altmap, sizeof(mhp_altmap));
>> +	}
>> +
>> +	/* call arch's memory hotadd */
>> +	ret = arch_add_memory(nid, start, size, &params);
>> +	if (ret < 0)
>> +		goto error;
>> +
>> +	/* create memory block devices after memory was added */
>> +	ret = create_memory_block_devices(start, size, params.altmap, group);
>> +	if (ret)
>> +		goto err_bdev;
>> +
>> +	return 0;
>> +
>> +err_bdev:
>> +	arch_remove_memory(start, size, NULL);
>> +error:
>> +	kfree(params.altmap);
>> +	return ret;
>> +}
>> +
>>  /*
>>   * NOTE: The caller must call lock_device_hotplug() to serialize hotplug
>>   * and online/offline operations (triggered e.g. by sysfs).
>> @@ -1388,14 +1426,10 @@ static bool mhp_supports_memmap_on_memory(unsigned long size)
>>   */
>>  int __ref add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
>>  {
>> -	struct mhp_params params = { .pgprot = pgprot_mhp(PAGE_KERNEL) };
>> +	unsigned long memblock_size = memory_block_size_bytes();
>>  	enum memblock_flags memblock_flags = MEMBLOCK_NONE;
>> -	struct vmem_altmap mhp_altmap = {
>> -		.base_pfn = PHYS_PFN(res->start),
>> -		.end_pfn = PHYS_PFN(res->end),
>> -	};
>>  	struct memory_group *group = NULL;
>> -	u64 start, size;
>> +	u64 start, size, cur_start;
>>  	bool new_node = false;
>>  	int ret;
>>
>> @@ -1436,28 +1470,21 @@ int __ref add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
>>  	/*
>>  	 * Self hosted memmap array
>>  	 */
>> -	if (mhp_flags & MHP_MEMMAP_ON_MEMORY) {
>> -		if (mhp_supports_memmap_on_memory(size)) {
>> -			mhp_altmap.free = memory_block_memmap_on_memory_pages();
>> -			params.altmap = kmalloc(sizeof(struct vmem_altmap), GFP_KERNEL);
>> -			if (!params.altmap)
>> +	if ((mhp_flags & MHP_MEMMAP_ON_MEMORY) &&
>> +	    mhp_supports_memmap_on_memory(memblock_size)) {
>> +		for (cur_start = start; cur_start < start + size;
>> +		     cur_start += memblock_size) {
>> +			ret = add_memory_create_devices(nid, group, cur_start,
>> +							memblock_size,
>> +							mhp_flags);
>> +			if (ret)
>>  				goto error;
>> -
>> -			memcpy(params.altmap, &mhp_altmap, sizeof(mhp_altmap));
>>  		}
>> -		/* fallback to not using altmap */
>> -	}
>> -
>> -	/* call arch's memory hotadd */
>> -	ret = arch_add_memory(nid, start, size, &params);
>> -	if (ret < 0)
>> -		goto error_free;
>> -
>> -	/* create memory block devices after memory was added */
>> -	ret = create_memory_block_devices(start, size, params.altmap, group);
>> -	if (ret) {
>> -		arch_remove_memory(start, size, NULL);
>> -		goto error_free;
>> +	} else {
>> +		ret = add_memory_create_devices(nid, group, start, size,
>> +						mhp_flags);
>> +		if (ret)
>> +			goto error;
>>  	}
>>
>>  	if (new_node) {
>> @@ -1494,8 +1521,6 @@ int __ref add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
>>  		walk_memory_blocks(start, size, NULL, online_memory_block);
>>
>>  	return ret;
>> -error_free:
>> -	kfree(params.altmap);
>>  error:
>>  	if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK))
>>  		memblock_remove(start, size);
>> @@ -2146,12 +2171,41 @@ void try_offline_node(int nid)
>>  }
>>  EXPORT_SYMBOL(try_offline_node);
>>
>> -static int __ref try_remove_memory(u64 start, u64 size)
>> +static void __ref remove_memory_block_and_altmap(int nid, u64 start, u64 size)
>>  {
>> +	int rc = 0;
>>  	struct memory_block *mem;
>> -	int rc = 0, nid = NUMA_NO_NODE;
>>  	struct vmem_altmap *altmap = NULL;
>>
>> +	rc = walk_memory_blocks(start, size, &mem, test_has_altmap_cb);
>> +	if (rc) {
>> +		altmap = mem->altmap;
>> +		/*
>> +		 * Mark altmap NULL so that we can add a debug
>> +		 * check on memblock free.
>> +		 */
>> +		mem->altmap = NULL;
>> +	}
>> +
>> +	/*
>> +	 * Memory block device removal under the device_hotplug_lock is
>> +	 * a barrier against racing online attempts.
>> +	 */
>> +	remove_memory_block_devices(start, size);
>> +
>> +	arch_remove_memory(start, size, altmap);
>> +
>> +	/* Verify that all vmemmap pages have actually been freed. */
>> +	if (altmap) {
>> +		WARN(altmap->alloc, "Altmap not fully unmapped");
>> +		kfree(altmap);
>> +	}
>> +}
>> +
>> +static int __ref try_remove_memory(u64 start, u64 size)
>> +{
>> +	int rc, nid = NUMA_NO_NODE;
>> +
>>  	BUG_ON(check_hotplug_memory_range(start, size));
>>
>>  	/*
>> @@ -2167,47 +2221,28 @@ static int __ref try_remove_memory(u64 start, u64 size)
>>  	if (rc)
>>  		return rc;
>>
>> +	mem_hotplug_begin();
>> +
>>  	/*
>> -	 * We only support removing memory added with MHP_MEMMAP_ON_MEMORY in
>> -	 * the same granularity it was added - a single memory block.
>> +	 * For memmap_on_memory, the altmaps could have been added on
>> +	 * a per-memblock basis. Loop through the entire range if so,
>> +	 * and remove each memblock and its altmap.
>>  	 */
>>  	if (mhp_memmap_on_memory()) {
>
> IIUC, even if mhp_memmap_on_memory() returns true, it's still possible
> that the memmap is put in DRAM after [2/2]. So arch_remove_memory()
> would be called for each memory block unnecessarily. Can we detect this
> (via the altmap?) and call remove_memory_block_and_altmap() for the
> whole range?
Good point. We should handle things memblock-per-memblock only if we
have to handle the altmap. Otherwise, just call a separate function that
doesn't care about the altmap -- e.g., call it
remove_memory_blocks_no_altmap().
We could simply walk all memory blocks and make sure either all have an
altmap or none has an altmap. If there is a mix, we should bail out with
WARN_ON_ONCE().
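
Something along these lines, maybe (completely untested, helper names
made up, declarations and some error handling elided):

static int count_memory_range_altmaps_cb(struct memory_block *mem,
					 void *arg)
{
	u64 *num_altmaps = arg;

	if (mem->altmap)
		*num_altmaps += 1;
	return 0;
}

/*
 * Return 1 if all memory blocks in the range have an altmap, 0 if none
 * has one, and bail out with -EINVAL on an unexpected mix.
 */
static int memory_blocks_have_altmaps(u64 start, u64 size)
{
	u64 num_memblocks = size / memory_block_size_bytes();
	u64 num_altmaps = 0;

	if (!mhp_memmap_on_memory())
		return 0;

	walk_memory_blocks(start, size, &num_altmaps,
			   count_memory_range_altmaps_cb);

	if (num_altmaps == 0)
		return 0;

	if (WARN_ON_ONCE(num_memblocks != num_altmaps))
		return -EINVAL;

	return 1;
}

And then try_remove_memory() would only loop when it has to:

	rc = memory_blocks_have_altmaps(start, size);
	if (rc < 0) {
		mem_hotplug_done();
		return rc;
	} else if (rc) {
		/* each memblock has its own altmap, remove one at a time */
		for (cur = start; cur < start + size;
		     cur += memory_block_size_bytes())
			remove_memory_block_and_altmap(nid, cur,
					memory_block_size_bytes());
	} else {
		remove_memory_blocks_no_altmap(start, size);
	}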
--
Cheers,
David / dhildenb