Message-ID: <aC9E4-0-RD-hWchr@gourry-fedora-PF4VCD3F>
Date: Thu, 22 May 2025 11:38:11 -0400
From: Gregory Price <gourry@...rry.net>
To: Bharata B Rao <bharata@....com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Jonathan.Cameron@...wei.com, dave.hansen@...el.com,
hannes@...xchg.org, mgorman@...hsingularity.net, mingo@...hat.com,
peterz@...radead.org, raghavendra.kt@....com, riel@...riel.com,
rientjes@...gle.com, sj@...nel.org, weixugc@...gle.com,
willy@...radead.org, ying.huang@...ux.alibaba.com, ziy@...dia.com,
dave@...olabs.net, nifan.cxl@...il.com, joshua.hahnjy@...il.com,
xuezhengchu@...wei.com, yiannis@...corp.com,
akpm@...ux-foundation.org, david@...hat.com
Subject: Re: [RFC PATCH v0 2/2] mm: sched: Batch-migrate misplaced pages
On Thu, May 22, 2025 at 01:03:35PM +0530, Bharata B Rao wrote:
> On 22-May-25 9:25 AM, Gregory Price wrote:
> >
> > So I think this, as presented, is a half-measure - and I don't think
> > it's a good half-measure. I think we might need to go all the way to a
> > set of per-cpu migration lists that a kernel worker can pluck the head
> > of on some interval. That would bound the number of isolated folios to
> > the number of CPUs rather than the number of tasks.
>
> Why per-cpu and not per-node? All folios that are targeted for a node can be
> in that node's list.
>
On systems with a significant number of threads (512-1024), these lists
may be highly contended. I suppose we can start with per-node, but I
would not be surprised if this went straight to per-cpu.
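
Roughly what I have in mind - an untested sketch, every name below is
made up, and init of the per-cpu locks is elided:

#include <linux/percpu.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/mm.h>

struct cpu_migrate_list {
	spinlock_t lock;		/* worker drains remote CPUs */
	struct list_head folios;	/* isolated folios awaiting move */
};

static DEFINE_PER_CPU(struct cpu_migrate_list, migrate_lists);

/* Task context: park an isolated folio on the local CPU's list. */
static void queue_misplaced_folio(struct folio *folio)
{
	struct cpu_migrate_list *ml = get_cpu_ptr(&migrate_lists);

	spin_lock(&ml->lock);
	list_add_tail(&folio->lru, &ml->folios);
	spin_unlock(&ml->lock);
	put_cpu_ptr(&migrate_lists);
}

/* Periodic worker: splice every CPU's list and migrate the batch. */
static void migrate_worker_fn(struct work_struct *work)
{
	LIST_HEAD(batch);
	int cpu;

	for_each_online_cpu(cpu) {
		struct cpu_migrate_list *ml =
			per_cpu_ptr(&migrate_lists, cpu);

		spin_lock(&ml->lock);
		list_splice_init(&ml->folios, &batch);
		spin_unlock(&ml->lock);
	}
	/* migrate 'batch' to its target node(s) here */
}

The point being that the number of folios sitting isolated at any given
time is bounded per-CPU rather than per-task.
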
> I think if we are leaving the migration to be done by the migrator thread
> later, then isolating them beforehand may not be ideal. In such cases
> tracking the hot pages via PFNs like I did in kpromoted may be better.
>
This seems like not a bad idea; you could use hot-swapped buffers to
prevent unbounded growth / contention. One of the problems with PFNs is
that the state of the page can change between candidacy and promotion.
I suppose the devil is in the details there.
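
To sketch the revalidation concern (again untested, all names invented):
swap the active buffer out from under the producers, then recheck each
PFN before acting on it, since the page may have been freed or remapped
after it was recorded:

#include <linux/mm.h>
#include <linux/memory_hotplug.h>

#define PFN_BUF_SZ 512

struct pfn_buf {
	unsigned long pfns[PFN_BUF_SZ];
	unsigned int nr;
};

static struct pfn_buf bufs[2];
static unsigned int active;	/* producers record into bufs[active] */

static void promote_recorded_pfns(void)
{
	struct pfn_buf *buf = &bufs[READ_ONCE(active)];
	unsigned int i;

	/* Hot-swap: producers start filling the other buffer. */
	WRITE_ONCE(active, !READ_ONCE(active));

	for (i = 0; i < buf->nr; i++) {
		struct page *page = pfn_to_online_page(buf->pfns[i]);
		struct folio *folio;

		if (!page)
			continue;	/* section offline or a hole */
		folio = page_folio(page);
		if (!folio_try_get(folio))
			continue;	/* folio was being freed */
		/*
		 * Re-check hotness/placement here - the mapping may
		 * have changed since the PFN was recorded - then
		 * isolate and migrate if it's still a candidate.
		 */
		folio_put(folio);
	}
	buf->nr = 0;
}

(Handwaving the producer-side synchronization - in-flight writers to
the old buffer would need an RCU grace period or similar before the
consumer touches it.)
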
~Gregory