Message-ID: <7e3e7327-9402-bb04-982e-0fb9419d1146@google.com>
Date: Tue, 16 Sep 2025 12:45:52 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Gregory Price <gourry@...rry.net>
cc: Matthew Wilcox <willy@...radead.org>, Bharata B Rao <bharata@....com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Jonathan.Cameron@...wei.com, dave.hansen@...el.com, hannes@...xchg.org,
mgorman@...hsingularity.net, mingo@...hat.com, peterz@...radead.org,
raghavendra.kt@....com, riel@...riel.com, sj@...nel.org,
weixugc@...gle.com, ying.huang@...ux.alibaba.com, ziy@...dia.com,
dave@...olabs.net, nifan.cxl@...il.com, xuezhengchu@...wei.com,
yiannis@...corp.com, akpm@...ux-foundation.org, david@...hat.com,
byungchul@...com, kinseyho@...gle.com, joshua.hahnjy@...il.com,
yuanchu@...gle.com, balbirs@...dia.com, alok.rathore@...sung.com
Subject: Re: [RFC PATCH v2 0/8] mm: Hot page tracking and promotion
infrastructure
On Wed, 10 Sep 2025, Gregory Price wrote:
> On Wed, Sep 10, 2025 at 04:39:16PM +0100, Matthew Wilcox wrote:
> > On Wed, Sep 10, 2025 at 08:16:45PM +0530, Bharata B Rao wrote:
> > > This patchset introduces a new subsystem for hot page tracking
> > > and promotion (pghot) that consolidates memory access information
> > > from various sources and enables centralized promotion of hot
> > > pages across memory tiers.
> >
> > Just to be clear, I continue to believe this is a terrible idea and we
> > should not do this. If systems will be built with CXL (and given the
> > horrendous performance, I cannot see why they would be), the kernel
> > should not be migrating memory around like this.
>
> I've been considering this problem from the opposite approach since LSFMM.
>
> Rather than decide how to move stuff around, what if instead we just
> decide not to ever put certain classes of memory on CXL. Right now, so
> long as CXL is in the page allocator, it's the wild west - any page can
> end up anywhere.
>
> I have enough data now from ZONE_MOVABLE-only CXL deployments on real
> workloads to show local CXL expansion is valuable and performant enough
> to be worth deploying - but the key piece for me is that ZONE_MOVABLE
> disallows GFP_KERNEL. For example: this keeps SLAB metadata out of
> CXL, but allows any given user-driven page allocation (including page
> cache, file, and anon mappings) to land there.
>
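A minimal sketch of that ZONE_MOVABLE-only setup via the memory hotplug
interface (the block number below is hypothetical; pick real ones from
/sys/devices/system/memory/):

```shell
# Online every newly hot-plugged block as ZONE_MOVABLE by default, so
# CXL capacity never serves GFP_KERNEL (e.g. SLAB) allocations:
echo online_movable > /sys/devices/system/memory/auto_online_blocks

# Or online one specific (hypothetical) block explicitly:
echo online_movable > /sys/devices/system/memory/memory100/state

# The equivalent boot-time default is the kernel command-line option:
#   memhp_default_state=online_movable
```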
This is similar to our use case, although direct allocation can be
controlled by cpusets or mempolicies as needed, depending on the memory
access latency required for the workload. Nothing new there, though: it's
the same argument as for NUMA in general, and abstracting these far memory
nodes as separate NUMA nodes makes this very straightforward.
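For example, steering a workload with numactl (node IDs below are
hypothetical; check `numactl -H` for the actual topology):

```shell
# Bind a latency-sensitive workload strictly to top-tier node 0:
numactl --membind=0 ./latency_sensitive_app

# Prefer node 0 for a tolerant batch job, but allow spillover to the
# far-memory (e.g. CXL) nodes when node 0 is full:
numactl --preferred=0 ./batch_job
```

The same restriction can be applied to a whole cgroup via cpuset.mems
rather than per-process mempolicies.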
> I'm hoping to share some of this data in the coming months.
>
> I've yet to see any strong indication that a complex hotness/movement
> system is warranted (yet) - but that may simply be because we have
> local cards with no switching involved. So far LRU-based promotion and
> demotion has been sufficient.
>
To me, this is a key point. As we've discussed in meetings, we're in the
early days here. The CHMU does provide a lot of flexibility, both to
create very good and very bad hotness trackers. But the larger point is
that we have multiple sources of hotness information depending on the
platform, and some of these sources only make sense for the kernel (or a
BPF offload) to maintain as the source of truth. Some of these sources
are clear-on-read, so only a single entity can act as the source of truth
for page hotness.
I've been pretty focused on the promotion story here rather than demotion
because of how responsive it needs to be. Harvesting the page table
accessed bits or waiting on a sliding window through NUMA Balancing (even
NUMAB=2) is not as responsive as needed for very fast promotion to top
tier memory, hence things like the CHMU (or PEBS, IBS, etc.).
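For reference, the NUMAB=2 mode mentioned here is the tiering variant of
NUMA balancing, enabled via sysctl (see
Documentation/admin-guide/sysctl/kernel.rst):

```shell
# Mode 2 (NUMA_BALANCING_MEMORY_TIERING) uses NUMA hint faults to
# promote hot pages out of slow-tier nodes, without the classic
# task-placement balancing of mode 1:
echo 2 > /proc/sys/kernel/numa_balancing

# Optionally cap promotion bandwidth per node, in MB/s:
echo 65536 > /proc/sys/kernel/numa_balancing_promote_rate_limit_MBps
```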
A few things that I think we need to discuss and align on:
- the kernel as the source of truth for all memory hotness information,
which can then be abstracted and used for multiple downstream purposes,
memory tiering only being one of them
- the long-term plan for NUMAB=2 and memory tiering support in the kernel
in general, are we planning on supporting this through NUMA hint faults
forever despite their drawbacks (too slow, too much overhead for KVM)
- the role of the kernel vs userspace in driving memory migration; there
has been lots of discussion on hardware assists that can be leveraged for
migration, but today the balancing is driven in process context. A
kthread as the driver of migration is not yet a settled argument, but it
is where a number of companies are currently looking
There's also some feature support possible with these CXL memory
expansion devices, which have started to pop up in labs, that can
drastically reduce overall TCO. Perhaps Wei Xu, cc'd, will be able to
chime in as well.
This topic seems due for an alignment session as well, so I will look to
get that scheduled in the coming weeks if people are up for it.
> It seems the closer to random-access the access pattern, the less
> valuable ANY movement is. Which should be intuitive. But, having
> CXL beats touching disk every day of the week.
>
> So I've become conflicted on this work - but only because I haven't seen
> the data to suggest such complexity is warranted.
>
> ~Gregory
>