Message-Id: <20250527185019.12457-1-sj@kernel.org>
Date: Tue, 27 May 2025 11:50:19 -0700
From: SeongJae Park <sj@...nel.org>
To: Bharata B Rao <bharata@....com>
Cc: SeongJae Park <sj@...nel.org>,
	linux-kernel@...r.kernel.org,
	linux-mm@...ck.org,
	Jonathan.Cameron@...wei.com,
	dave.hansen@...el.com,
	gourry@...rry.net,
	hannes@...xchg.org,
	mgorman@...hsingularity.net,
	mingo@...hat.com,
	peterz@...radead.org,
	raghavendra.kt@....com,
	riel@...riel.com,
	rientjes@...gle.com,
	weixugc@...gle.com,
	willy@...radead.org,
	ying.huang@...ux.alibaba.com,
	ziy@...dia.com,
	dave@...olabs.net,
	nifan.cxl@...il.com,
	joshua.hahnjy@...il.com,
	xuezhengchu@...wei.com,
	yiannis@...corp.com,
	akpm@...ux-foundation.org,
	david@...hat.com
Subject: Re: [RFC PATCH v0 0/2] Batch migration for NUMA balancing

On Mon, 26 May 2025 10:50:02 +0530 Bharata B Rao <bharata@....com> wrote:

> Hi SJ,
> 
> On 22-May-25 12:15 AM, SeongJae Park wrote:
> > Hi Bharata,
> > 
> > On Wed, 21 May 2025 13:32:36 +0530 Bharata B Rao <bharata@....com> wrote:
> > 
> >> Hi,
> >>
> >> This is an attempt to convert NUMA balancing to do batched
> >> migration instead of migrating one folio at a time. The basic
> >> idea is to collect (from the hint fault handler) the folios to
> >> be migrated in a list and batch-migrate them from task_work
> >> context. More details about the specifics are present in patch
> >> 2/2.
> >>
> >> During LSFMM[1] and subsequent discussions in MM alignment calls[2],
> >> it was suggested that separate migration threads to handle migration
> >> or promotion requests may be desirable. Existing NUMA balancing, hot
> >> page promotion and other future promotion techniques could off-load
> >> the migration part to these threads. Or, if we manage to have a
> >> single source of hotness truth like kpromoted[3], then that too can
> >> hand over migration requests to the migration threads. I am
> >> envisaging that different hotness sources like kmmscand[4], MGLRU[5],
> >> IBS[6] and CXL HMU would push hot page info to kpromoted, which
> >> would then isolate the folios to be promoted and push them to the
> >> migrator thread.
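
(A side note for readers who haven't looked at the patches yet: below is
my rough understanding of the proposed flow, as a sketch.  All names in
it are my own illustrative guesses, not the actual code of the series;
see patch 2/2 for the real implementation.)

/*
 * Illustrative sketch, not the patch's code.  The NUMA hint fault
 * handler queues misplaced folios on a list instead of migrating them
 * one by one, and a task_work callback migrates the whole list with a
 * single migrate_pages() call on the way back to user space.
 * Headers are approximate; e.g. struct migration_target_control and
 * folio_isolate_lru() live in mm-internal headers.
 */
#include <linux/migrate.h>
#include <linux/task_work.h>

struct batch_migrate_ctx {
	struct callback_head twork;
	struct list_head folios;
	int target_nid;
};

/* Hint fault handler side: queue the folio instead of migrating now. */
static void queue_misplaced_folio(struct batch_migrate_ctx *ctx,
				  struct folio *folio)
{
	if (folio_isolate_lru(folio))
		list_add_tail(&folio->lru, &ctx->folios);
}

/* task_work side: batch-migrate everything queued so far. */
static void batch_migrate_workfn(struct callback_head *head)
{
	struct batch_migrate_ctx *ctx =
		container_of(head, struct batch_migrate_ctx, twork);
	struct migration_target_control mtc = {
		.nid = ctx->target_nid,
		.gfp_mask = GFP_HIGHUSER_MOVABLE,
	};

	/* migrate_pages() already handles a whole list as one batch. */
	migrate_pages(&ctx->folios, alloc_migration_target, NULL,
		      (unsigned long)&mtc, MIGRATE_ASYNC,
		      MR_NUMA_MISPLACED, NULL);
}

/* Arm the batch, e.g. when the first misplaced folio is found. */
static void batch_migrate_arm(struct batch_migrate_ctx *ctx, int nid)
{
	INIT_LIST_HEAD(&ctx->folios);
	ctx->target_nid = nid;
	init_task_work(&ctx->twork, batch_migrate_workfn);
	task_work_add(current, &ctx->twork, TWA_RESUME);
}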
> > 
> > I think (or, hope) it would not be worthless or rude to also mention,
> > here, other existing and ongoing works that have the potential to
> > serve a similar purpose or to be collaborated with in the future.
> > 
> > DAMON is designed for a sort of multi-source access information
> > handling.  At LSFMM, I proposed[1] the damon_report_access() interface
> > to make it easier to extend DAMON for more types of access
> > information.  Currently damon_report_access() is under early
> > development.  I think it has the potential to serve something similar
> > to your single-source goal.
> > 
> > Also at LSFMM, I proposed damos_add_folio() for cases where callers
> > want to utilize the DAMON worker thread (kdamond) as an asynchronous
> > executor of memory management operations while using its other
> > features such as [auto-tuned] quotas.  I think it has the potential
> > to serve something similar to your migration threads.  I haven't
> > started damos_add_folio() development yet, though.
> > 
> > I remember we discussed DAMON a bit on the mailing list and at LSFMM,
> > during your session.  IIRC, you were also looking for time to see if
> > there is a chance to use DAMON in some way.  Due to a technical issue,
> > we were unable to discuss the two new proposals during my LSFMM
> > session, and it has been a while since our last discussion.  So if you
> > don't mind, I'd like to ask whether you have any opinions or comments
> > about these.
> > 
> > [1] https://lwn.net/Articles/1016525/
> 
> Since this patchset was just about making the migration batched and 
> async for NUMAB, I didn't mention DAMON as an alternative here.

I was thinking a clarification like this could be useful for readers, though,
since you mentioned the future work together.  Thank you for clarifying.

> 
> One of the concerns I have always had about DAMON, when it is
> considered as a replacement for existing hot page migration, is its
> current inability to gather and maintain hot page info at per-folio
> granularity.

I think this is a very valid concern.  But I don't think DAMON should be a
_replacement_.  Rather, I'm looking for a chance to make the existing
approaches help each other.  For example, I recommend running DAMON-based
memory tiering[1] together with LRU-based demotion.  Likewise, if the folio
granularity is a real issue, I see no reason to discourage using it together
with NUMAB-2-based promotion.  That is, NUMAB-2 would still do synchronous
promotion, but DAMON would also do it asynchronously, so the amount of
synchronous promotion and its overhead would be reduced.
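
For readers not familiar with the DAMON side, the co-usage I mean is
conceptually just a DAMOS scheme like the below running while NUMAB-2
stays enabled.  Note this is a sketch written from memory, and the exact
damon_new_scheme() signature differs between trees, so treat it as
pseudo-code rather than a drop-in:

/*
 * Sketch only: a DAMOS scheme that migrates hot regions to a fast node
 * (promotion), to run concurrently with NUMAB-2's synchronous
 * hint-fault promotion.
 */
#include <linux/damon.h>

static struct damos *promotion_scheme(int fast_nid)
{
	/* Match regions of any size/age that were accessed at all. */
	struct damos_access_pattern pattern = {
		.min_sz_region = PAGE_SIZE,
		.max_sz_region = ULONG_MAX,
		.min_nr_accesses = 1,
		.max_nr_accesses = UINT_MAX,
		.min_age_region = 0,
		.max_age_region = UINT_MAX,
	};
	struct damos_quota quota = {};	/* zero quota: no limit */
	struct damos_watermarks wmarks = {
		.metric = DAMOS_WMARK_NONE,	/* always active */
	};

	/* apply_interval_us == 0: apply every aggregation interval. */
	return damon_new_scheme(&pattern, DAMOS_MIGRATE_HOT, 0,
				&quota, &wmarks, fast_nid);
}

The point is that kdamond applies such a scheme asynchronously and under
[auto-tuned] quotas, so the synchronous promotions done from the hint
fault path can shrink.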

I didn't encourage using NUMAB-2-based promotion together with DAMON-based
memory tiering[1] not because I saw a problem with such co-usage, but just
because I found no clear benefit from it in my test setup.  In theory, I
think running those together makes sense.

That said, we're also making efforts to overcome the folio-granularity issue
on the DAMON side.  We implemented page-level filters, motivated by SK
hynix's test results, and developed monitoring-intervals auto-tuning for
better overall monitoring accuracy.  We proposed damon_report_access() and
damos_add_folios() as yet more opportunities to better deal with the issue.
That is why I was curious about your opinion on damon_report_access() and
damos_add_folios().  I understand that could be out of the scope of this
patch series, though.
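
In case it helps to have something concrete to comment on, the rough
shapes I have in mind are like below.  Nothing here is implemented or
even fully designed yet, so every name and field is tentative:

/*
 * Tentative, unimplemented interface sketches; everything below is a
 * strawman for discussion, not in-tree code.
 */

/*
 * Hotness sources (NUMA hint faults, kmmscand, IBS, CXL HMU, ...)
 * report observed accesses to DAMON, which folds them into its
 * region-based monitoring results.
 */
struct damon_access_report {
	unsigned long pfn;	/* start of the accessed range */
	unsigned long nr_pages;
	/* possibly: timestamp, source id, per-source weight, ... */
};
void damon_report_access(struct damon_access_report *report);

/*
 * Callers hand folios that they already decided to, e.g., promote
 * over to kdamond, which applies the scheme's action asynchronously
 * under its [auto-tuned] quotas.
 */
int damos_add_folios(struct damos *scheme, struct list_head *folios);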

> How much that eventually matters to the workloads really remains to
> be seen.

Cannot agree more.  Nonetheless, as mentioned above, my test setup[1] didn't
show the problem.  That said, I'm not really confident in my test setup, and
I don't think it is good enough for verifying the problem.  Hence I'm trying
to make a better test setup for this.  I'll share more about the new setup
if I make some progress.  I will also be more than happy to learn about
others' test setups if they have good ones, or any suggestions.

[1] https://lore.kernel.org/20250420194030.75838-1-sj@kernel.org


Thanks,
SJ

[...]
