Message-Id: <20200821103431.13481-4-david@redhat.com>
Date: Fri, 21 Aug 2020 12:34:29 +0200
From: David Hildenbrand <david@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: virtualization@...ts.linux-foundation.org, linux-mm@...ck.org,
linux-hyperv@...r.kernel.org, xen-devel@...ts.xenproject.org,
David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Dan Williams <dan.j.williams@...el.com>,
"Michael S . Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Pankaj Gupta <pankaj.gupta.linux@...il.com>,
Baoquan He <bhe@...hat.com>,
Wei Yang <richardw.yang@...ux.intel.com>
Subject: [PATCH v1 3/5] virtio-mem: try to merge system ram resources

virtio-mem adds memory in memory block granularity so that it can
remove it in the same granularity again later and can grow slowly on
demand. Adding a lot of memory this way, however, results in quite a
lot of resources. Resources are effectively stored in a list-based
tree. Having a lot of resources not only wastes memory, it also makes
traversing that tree more expensive and makes /proc/iomem explode in
size (e.g., kexec-tools has to manually merge resources when creating
a kdump header).
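
For reference, each line in /proc/iomem corresponds to one node of
that tree, which is built from struct resource objects chained via
parent/sibling/child pointers (abridged from include/linux/ioport.h):

struct resource {
	resource_size_t start;
	resource_size_t end;
	const char *name;
	unsigned long flags;
	unsigned long desc;
	struct resource *parent, *sibling, *child;
};

All children of a resource form a singly linked sibling list, so the
number of entries directly determines traversal cost.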
Before this patch, we get (/proc/iomem) when hotplugging 2G via virtio-mem
on x86-64:
[...]
100000000-13fffffff : System RAM
140000000-33fffffff : virtio0
  140000000-147ffffff : System RAM (virtio_mem)
  148000000-14fffffff : System RAM (virtio_mem)
  150000000-157ffffff : System RAM (virtio_mem)
  158000000-15fffffff : System RAM (virtio_mem)
  160000000-167ffffff : System RAM (virtio_mem)
  168000000-16fffffff : System RAM (virtio_mem)
  170000000-177ffffff : System RAM (virtio_mem)
  178000000-17fffffff : System RAM (virtio_mem)
  180000000-187ffffff : System RAM (virtio_mem)
  188000000-18fffffff : System RAM (virtio_mem)
  190000000-197ffffff : System RAM (virtio_mem)
  198000000-19fffffff : System RAM (virtio_mem)
  1a0000000-1a7ffffff : System RAM (virtio_mem)
  1a8000000-1afffffff : System RAM (virtio_mem)
  1b0000000-1b7ffffff : System RAM (virtio_mem)
  1b8000000-1bfffffff : System RAM (virtio_mem)
3280000000-32ffffffff : PCI Bus 0000:00

With this patch, we get (/proc/iomem):
[...]
fffc0000-ffffffff : Reserved
100000000-13fffffff : System RAM
140000000-33fffffff : virtio0
  140000000-1bfffffff : System RAM (virtio_mem)
3280000000-32ffffffff : PCI Bus 0000:00

Of course, with more hotplugged memory, it gets worse. When unplugging
memory blocks again, try_remove_memory() (via
offline_and_remove_memory()) will properly split the resource up again.
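
The helper doing the merging is introduced in an earlier patch of this
series and is not shown here. For illustration only, merging adjacent,
busy System RAM children of a parent resource could look roughly like
the following sketch (merge_siblings_sketch() is a made-up name;
resource_lock handling and the exact lifetime rules for freed entries
are omitted):

/*
 * Minimal sketch, not the actual implementation: coalesce physically
 * contiguous, busy System RAM children of @parent into single entries.
 */
static void merge_siblings_sketch(struct resource *parent)
{
	const unsigned long flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;
	struct resource *cur = parent->child;

	while (cur && cur->sibling) {
		struct resource *next = cur->sibling;

		if ((cur->flags & flags) == flags &&
		    (next->flags & flags) == flags &&
		    cur->end + 1 == next->start) {
			resource_size_t new_end = next->end;

			/* Unlink and free next, then extend cur over it. */
			release_resource(next);
			kfree(next);
			cur->end = new_end;
			/* cur->sibling has advanced; re-check cur. */
		} else {
			cur = cur->sibling;
		}
	}
}
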
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Michal Hocko <mhocko@...e.com>
Cc: Dan Williams <dan.j.williams@...el.com>
Cc: Michael S. Tsirkin <mst@...hat.com>
Cc: Jason Wang <jasowang@...hat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@...il.com>
Cc: Baoquan He <bhe@...hat.com>
Cc: Wei Yang <richardw.yang@...ux.intel.com>
Signed-off-by: David Hildenbrand <david@...hat.com>
---
 drivers/virtio/virtio_mem.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index 834b7c13ef3dc..3aae0f87073a8 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -407,6 +407,7 @@ static int virtio_mem_mb_add(struct virtio_mem *vm, unsigned long mb_id)
 {
 	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id);
 	int nid = vm->nid;
+	int rc;
 
 	if (nid == NUMA_NO_NODE)
 		nid = memory_add_physaddr_to_nid(addr);
@@ -423,8 +424,17 @@ static int virtio_mem_mb_add(struct virtio_mem *vm, unsigned long mb_id)
 	}
 
 	dev_dbg(&vm->vdev->dev, "adding memory block: %lu\n", mb_id);
-	return add_memory_driver_managed(nid, addr, memory_block_size_bytes(),
-					 vm->resource_name);
+	rc = add_memory_driver_managed(nid, addr, memory_block_size_bytes(),
+				       vm->resource_name);
+	if (!rc) {
+		/*
+		 * Try to reduce the number of system ram resources in our
+		 * resource container. The memory removal path will properly
+		 * split them up again.
+		 */
+		merge_system_ram_resources(vm->parent_resource);
+	}
+	return rc;
 }
 
 /*
--
2.26.2