Message-ID: <20250616133931.206626-1-bharata@amd.com>
Date: Mon, 16 Jun 2025 19:09:27 +0530
From: Bharata B Rao <bharata@....com>
To: <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
CC: <Jonathan.Cameron@...wei.com>, <dave.hansen@...el.com>,
<gourry@...rry.net>, <hannes@...xchg.org>, <mgorman@...hsingularity.net>,
<mingo@...hat.com>, <peterz@...radead.org>, <raghavendra.kt@....com>,
<riel@...riel.com>, <rientjes@...gle.com>, <sj@...nel.org>,
<weixugc@...gle.com>, <willy@...radead.org>, <ying.huang@...ux.alibaba.com>,
<ziy@...dia.com>, <dave@...olabs.net>, <nifan.cxl@...il.com>,
<xuezhengchu@...wei.com>, <yiannis@...corp.com>, <akpm@...ux-foundation.org>,
<david@...hat.com>, <bharata@....com>
Subject: [RFC PATCH v1 0/4] Kernel thread based async batch migration
Hi,
This is a continuation of the earlier post[1] that attempted to
make NUMA Balancing migrations async and batched. In this version,
per-node kernel threads (kmigrated) are created to handle the
migrations asynchronously.
This series adds a few fields to the extended page flags that can be
used both by the sub-systems that request migrations and by kmigrated,
which migrates the pages. Some of the fields are defined with a
kpromoted-like subsystem in mind, to manage hot page metrics, but are
unused right now.
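
To give a rough idea, the migration info could be encoded in the
page_ext flags word along the lines below. The names and bit positions
here are illustrative only and do not match the actual definitions in
the patches (assumes linux/bits.h for BIT()):

	/*
	 * Illustrative only: one possible layout of the per-page
	 * migration info kept in the page extension flags.
	 */
	#define PAGE_MIG_NID_SHIFT	0
	#define PAGE_MIG_NID_MASK	0x3ffUL		/* target node id */
	#define PAGE_MIG_READY		BIT(10)		/* ready to be migrated */
	/* higher bits reserved for hot page metrics (access frequency etc.) */
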
Currently only NUMA Balancing is changed to make use of the async
batched migration. It does so by recording the target NID and the
readiness of the page to be migrated in the extended page flags
fields.
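
As an illustration of the producer side, the NUMA hint fault path could
record this information with a helper roughly like the one below. The
helper name and the flag layout are assumptions carried over from the
sketch above, not the interfaces from the patches; the real code would
also need atomic updates of the flags:

	/*
	 * Hypothetical sketch: record the target node and mark the page
	 * as ready for migration, instead of migrating it synchronously
	 * from the hint fault path.
	 */
	static void kmigrated_mark_page(struct page *page, int target_nid)
	{
		struct page_ext *page_ext = page_ext_get(page);

		if (!page_ext)
			return;

		/* real code would use atomic/locked flag updates */
		page_ext->flags &= ~(PAGE_MIG_NID_MASK << PAGE_MIG_NID_SHIFT);
		page_ext->flags |= ((unsigned long)target_nid & PAGE_MIG_NID_MASK)
					<< PAGE_MIG_NID_SHIFT;
		page_ext->flags |= PAGE_MIG_READY;

		page_ext_put(page_ext);
	}
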
Each kmigrated thread periodically scans the PFNs of its node,
identifies the pages marked for migration and batch-migrates them.
Unlike the previous approach, the responsibility of isolating the
pages now lies with kmigrated.
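
One scan pass of a kmigrated thread could then look roughly like the
sketch below. kmigrated_page_ready() and kmigrated_batch_migrate() are
made-up names standing in for the logic described above, and reference
counting/error handling are elided for brevity:

	/*
	 * Hypothetical sketch of one kmigrated scan pass: walk the node's
	 * PFNs, isolate folios that were marked for migration and hand
	 * the whole batch to the migration code.
	 */
	static void kmigrated_scan_node(pg_data_t *pgdat)
	{
		unsigned long pfn, end_pfn = pgdat_end_pfn(pgdat);
		LIST_HEAD(migrate_list);

		for (pfn = pgdat->node_start_pfn; pfn < end_pfn; pfn++) {
			struct page *page = pfn_to_online_page(pfn);
			struct folio *folio;

			if (!page || !kmigrated_page_ready(page))
				continue;

			folio = page_folio(page);
			if (folio_isolate_lru(folio))
				list_add_tail(&folio->lru, &migrate_list);
		}

		if (!list_empty(&migrate_list))
			kmigrated_batch_migrate(&migrate_list);
	}
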
The major difference between this approach and the way kpromoted[2]
tracked hot pages is the elimination of heavy synchronization points
between the producers (sub-systems that request migrations or report
a hot page) and the consumer (kmigrated or kpromoted). Instead of
tracking the list of hot pages in an orthogonal manner, this approach
ties the hot page or migration information to the struct page.
TODOs:
- Very lightly tested (only with NUMAB=1) and posted to get some
feedback on the overall approach.
- Currently uses the flags field from the page extension sub-system.
However, it needs to be checked whether it is preferable to use/allocate
a separate 32-bit field exclusively for this purpose, either within the
page extension sub-system or outside of it.
- Benefit of async batch migration still needs to be measured.
- A few things like the number of pages to batch, the aggressiveness
of the kthread and the kthread sleep interval still need to be tuned.
- The logic to skip scanning of zones that don't have any pages
marked for migration needs to be added.
- No separate kernel config is defined currently and the dependency
on PAGE_EXTENSION isn't cleanly laid out. Some added definitions
currently sit in page_ext.h, which may not be an ideal location
for them.
[1] v0 - https://lore.kernel.org/linux-mm/20250521080238.209678-3-bharata@amd.com/
[2] kpromoted patchset - https://lore.kernel.org/linux-mm/20250306054532.221138-1-bharata@amd.com/
Bharata B Rao (3):
mm: migrate: Allow misplaced migration without VMA too
mm: kmigrated - Async kernel migration thread
mm: sched: Batch-migrate misplaced pages
Gregory Price (1):
migrate: implement migrate_misplaced_folios_batch
include/linux/migrate.h | 6 ++
include/linux/mmzone.h | 5 +
include/linux/page_ext.h | 17 +++
mm/Makefile | 3 +-
mm/kmigrated.c | 223 +++++++++++++++++++++++++++++++++++++++
mm/memory.c | 30 +-----
mm/migrate.c | 36 ++++++-
mm/mm_init.c | 6 ++
mm/page_ext.c | 11 ++
9 files changed, 309 insertions(+), 28 deletions(-)
create mode 100644 mm/kmigrated.c
--
2.34.1