Message-ID: <975038b2-0c38-4937-9934-f81c082ff127@amd.com>
Date: Mon, 26 May 2025 10:44:51 +0530
From: Bharata B Rao <bharata@....com>
To: David Hildenbrand <david@...hat.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Cc: Jonathan.Cameron@...wei.com, dave.hansen@...el.com, gourry@...rry.net,
hannes@...xchg.org, mgorman@...hsingularity.net, mingo@...hat.com,
peterz@...radead.org, raghavendra.kt@....com, riel@...riel.com,
rientjes@...gle.com, sj@...nel.org, weixugc@...gle.com, willy@...radead.org,
ying.huang@...ux.alibaba.com, ziy@...dia.com, dave@...olabs.net,
nifan.cxl@...il.com, joshua.hahnjy@...il.com, xuezhengchu@...wei.com,
yiannis@...corp.com, akpm@...ux-foundation.org
Subject: Re: [RFC PATCH v0 2/2] mm: sched: Batch-migrate misplaced pages
On 22-May-25 9:41 PM, David Hildenbrand wrote:
> On 21.05.25 10:02, Bharata B Rao wrote:
>> Currently the folios identified as misplaced by the NUMA
>> balancing sub-system are migrated one by one from the NUMA
>> hint fault handler as and when they are identified as
>> misplaced.
>>
>> Instead of such single folio migrations, batch them and
>> migrate them at once.
>>
>> Identified misplaced folios are isolated and stored in
>> a per-task list. A new task_work is queued from task tick
>> handler to migrate them in batches. Migration is done
>> periodically or when the number of pending isolated folios exceeds
>> a threshold.
>
> That means that these pages are effectively unmovable for other purposes
> (CMA, compaction, long-term pinning, whatever) until that list was drained.
>
> Bad.
During last week's MM alignment call on this subject, it was decided
not to isolate and batch folios from the fault context and then
migrate them from task_work context like this.

However, since the time the folios stay in the isolated state is
bounded both by a timeout (1s in my patchset) and by the number of
folios isolated, I thought it should be okay, but I may be wrong.
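
To make the intended scheme concrete, here is a rough, hypothetical C
sketch of the batching described above: misplaced folios are isolated
onto a per-task list from the fault path, and a task_work queued from
the tick handler drains the list either after a timeout or once a
count threshold is crossed. All identifiers (numab_batch,
NUMAB_BATCH_MAX, numab_migrate_batch(), ...) and the constants are
illustrative only, not the names used in the patchset, and the actual
migration call is elided:

#include <linux/mm.h>
#include <linux/list.h>
#include <linux/jiffies.h>
#include <linux/task_work.h>

#define NUMAB_BATCH_MAX		512	/* drain once this many folios are isolated */
#define NUMAB_BATCH_INTERVAL	HZ	/* ...or at least once per second */

struct numab_batch {
	struct list_head	folios;		/* isolated, misplaced folios */
	unsigned int		nr;		/* number of folios on the list */
	unsigned long		last_drain;	/* jiffies at the previous drain */
	struct callback_head	work;		/* task_work that migrates the batch */
};

/* Fault path: instead of migrating right away, park the isolated folio. */
static void numab_queue_folio(struct numab_batch *b, struct folio *folio)
{
	list_add_tail(&folio->lru, &b->folios);
	b->nr++;
}

/* Tick path: queue the task_work once the batch is old or large enough. */
static void numab_maybe_queue_work(struct task_struct *p, struct numab_batch *b)
{
	if (list_empty(&b->folios))
		return;

	/* A real implementation would also guard against double-queueing. */
	if (b->nr >= NUMAB_BATCH_MAX ||
	    time_after(jiffies, b->last_drain + NUMAB_BATCH_INTERVAL))
		task_work_add(p, &b->work, TWA_RESUME);
}

/* task_work callback: hand the whole batch to the migration core at once. */
static void numab_migrate_batch(struct callback_head *work)
{
	struct numab_batch *b = container_of(work, struct numab_batch, work);
	LIST_HEAD(batch);

	list_splice_init(&b->folios, &batch);
	b->nr = 0;
	b->last_drain = jiffies;

	/*
	 * Migration of the 'batch' list would happen here, e.g. via
	 * migrate_pages(); the exact call is omitted since its signature
	 * varies across kernel versions.
	 */
}

(b->work would be set up once with init_task_work(&b->work,
numab_migrate_batch) when the per-task state is initialised.)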
Regards,
Bharata.