Message-ID: <c5enwlaui37lm4uxlsjbuhesy6hfwwqbxzzs77zn7kmsceojv3@f6tquznpmizu>
Date: Mon, 24 Nov 2025 10:09:37 +1100
From: Alistair Popple <apopple@...dia.com>
To: Gregory Price <gourry@...rry.net>
Cc: linux-mm@...ck.org, kernel-team@...a.com, linux-cxl@...r.kernel.org,
linux-kernel@...r.kernel.org, nvdimm@...ts.linux.dev, linux-fsdevel@...r.kernel.org,
cgroups@...r.kernel.org, dave@...olabs.net, jonathan.cameron@...wei.com,
dave.jiang@...el.com, alison.schofield@...el.com, vishal.l.verma@...el.com,
ira.weiny@...el.com, dan.j.williams@...el.com, longman@...hat.com,
akpm@...ux-foundation.org, david@...hat.com, lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com, vbabka@...e.cz, rppt@...nel.org, surenb@...gle.com,
mhocko@...e.com, osalvador@...e.de, ziy@...dia.com, matthew.brost@...el.com,
joshua.hahnjy@...il.com, rakie.kim@...com, byungchul@...com, ying.huang@...ux.alibaba.com,
mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com, rostedt@...dmis.org,
bsegall@...gle.com, mgorman@...e.de, vschneid@...hat.com, tj@...nel.org,
hannes@...xchg.org, mkoutny@...e.com, kees@...nel.org, muchun.song@...ux.dev,
roman.gushchin@...ux.dev, shakeel.butt@...ux.dev, rientjes@...gle.com, jackmanb@...gle.com,
cl@...two.org, harry.yoo@...cle.com, axelrasmussen@...gle.com,
yuanchu@...gle.com, weixugc@...gle.com, zhengqi.arch@...edance.com,
yosry.ahmed@...ux.dev, nphamcs@...il.com, chengming.zhou@...ux.dev,
fabio.m.de.francesco@...ux.intel.com, rrichter@....com, ming.li@...omail.com, usamaarif642@...il.com,
brauner@...nel.org, oleg@...hat.com, namcao@...utronix.de, escape@...ux.alibaba.com,
dongjoo.seo1@...sung.com
Subject: Re: [RFC LPC2026 PATCH v2 00/11] Specific Purpose Memory NUMA Nodes
On 2025-11-22 at 08:07 +1100, Gregory Price <gourry@...rry.net> wrote...
> On Tue, Nov 18, 2025 at 06:02:02PM +1100, Alistair Popple wrote:
> >
> > I'm interested in the contrast with zone_device, and in particular why
> > device_coherent memory doesn't end up being a good fit for this.
> >
> > > - Why mempolicy.c and cpusets as-is are insufficient
> > > - SPM types seeking this form of interface (Accelerator, Compression)
> >
> > I'm sure you can guess my interest is in GPUs, which also have memory that some
> > people consider should only be used for specific purposes :-) Currently our coherent
> > GPUs online this as a normal NUMA node, for which we have also generally
> > found mempolicy, cpusets, etc. inadequate, so it will be interesting to
> > hear what shortcomings you have been running into (I'm less familiar with the
> > Compression cases you talk about here though).
> >
>
> after some thought, talks, and doc reading, it seems like the
> zone_device setups don't allow the CPU to map the devmem into page
> tables, and instead depend on migrate_device logic (unless the docs are
> out of sync with the code these days). That's at least what's described
> in hmm and migrate_device.

There are multiple types here (DEVICE_PRIVATE and DEVICE_COHERENT). The former
is mostly irrelevant for this discussion, but I'm including the descriptions here
for completeness. You are correct in saying that the only way either of these
currently gets mapped into the page tables is via explicit migration of memory
to ZONE_DEVICE by a driver. There is also a corner case for first-touch handling
which allows drivers to establish mappings to zero pages on a device if the page
hasn't been populated previously on the CPU.

These pages can, in some sense at least, be mapped on the CPU. DEVICE_COHERENT
pages are mapped normally (ie. the CPU can access them directly), whereas
DEVICE_PRIVATE pages are mapped using special swap entries so drivers can
emulate coherence by migrating pages back. The latter is used by devices without
coherent interconnects (eg. PCIe), whereas the former could be used by eg. CXL.
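
For reference, registering this kind of memory as ZONE_DEVICE looks roughly like
the sketch below. This is illustrative only - the my_gpu_* names are made up and
error handling is omitted; migrate_to_ram() is only mandatory for
MEMORY_DEVICE_PRIVATE, and for that case the range would normally come from
request_free_mem_region() rather than the device's physical address range:

#include <linux/device.h>
#include <linux/memremap.h>
#include <linux/mm.h>

static vm_fault_t my_gpu_migrate_to_ram(struct vm_fault *vmf)
{
        /* Migrate the faulting page back to system memory so the CPU
         * can access it (only required for MEMORY_DEVICE_PRIVATE).
         */
        return 0;
}

static void my_gpu_page_free(struct page *page)
{
        /* Hand the page back to the driver's own allocator. */
}

static const struct dev_pagemap_ops my_gpu_pagemap_ops = {
        .page_free      = my_gpu_page_free,
        .migrate_to_ram = my_gpu_migrate_to_ram,
};

static int my_gpu_register_mem(struct device *dev, u64 start, u64 size)
{
        struct dev_pagemap *pgmap;

        pgmap = devm_kzalloc(dev, sizeof(*pgmap), GFP_KERNEL);
        if (!pgmap)
                return -ENOMEM;

        pgmap->type = MEMORY_DEVICE_COHERENT; /* or MEMORY_DEVICE_PRIVATE */
        pgmap->range.start = start;
        pgmap->range.end = start + size - 1;
        pgmap->nr_range = 1;
        pgmap->owner = dev;
        pgmap->ops = &my_gpu_pagemap_ops;

        return PTR_ERR_OR_ZERO(devm_memremap_pages(dev, pgmap));
}
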
> Assuming this is out of date and ZONE_DEVICE memory is mappable into
> page tables, and assuming you want sparse allocation, ZONE_DEVICE seems to
> suggest you at least have to re-implement the buddy logic (which isn't
> that tall of an ask).

That's basically what happens - GPU drivers need memory allocation and therefore
re-implement some form of memory allocator. I agree that just being able to
reuse the buddy logic probably isn't that compelling and isn't really of
interest (hence some of my original questions on what this is about).

> But I could imagine an (overly simplistic) pattern with SPM Nodes:
>
> fd = open("/dev/gpu_mem", ...)
> buf = mmap(fd, ...)
> buf[0]
> 1) driver takes the fault
> 2) driver calls alloc_page(..., gpu_node, GFP_SPM_NODE)
> 3) driver manages any special page table masks,
>    like marking pages RO/RW to manage ownership.

Of course, as an aside, this needs to match the CPU PTEs (this is what
hmm_range_fault() is primarily used for - rough sketch further below).

> 4) driver sends the gpu the (mapping_id, pfn, index) information
> so that gpu can map the region in its page tables.

On coherent systems this often just uses HW address translation services
(ATS), although I think the specific implementation of how page-tables are
mirrored/shared is orthogonal to this.

> 5) since the memory is cache coherent, gpu and cpu are free to
> operate directly on the pages without any additional magic
> (except typical concurrency controls).

This is roughly how things work with DEVICE_PRIVATE/COHERENT memory today,
except in the case of DEVICE_PRIVATE in step (5) above. In that case the page is
mapped as a non-present special swap entry that triggers a driver callback due
to the lack of cache coherence.
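
That callback is the migrate_to_ram() hook from dev_pagemap_ops (stubbed out in
the earlier sketch). What it typically does looks roughly like this - again
illustrative only, the my_gpu_* names are made up and the data copy and most
error handling are elided:

#include <linux/migrate.h>
#include <linux/mm.h>

/* Same owner pointer as passed in dev_pagemap->owner. */
static void *my_gpu_owner;

static vm_fault_t my_gpu_migrate_to_ram(struct vm_fault *vmf)
{
        unsigned long src = 0, dst = 0;
        struct migrate_vma args = {
                .vma            = vmf->vma,
                .start          = vmf->address & PAGE_MASK,
                .end            = (vmf->address & PAGE_MASK) + PAGE_SIZE,
                .src            = &src,
                .dst            = &dst,
                .pgmap_owner    = my_gpu_owner,
                .flags          = MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
                .fault_page     = vmf->page,
        };

        if (migrate_vma_setup(&args))
                return VM_FAULT_SIGBUS;

        /* Allocate a system page, copy the device data into it and fill
         * in dst accordingly, then:
         */
        migrate_vma_pages(&args);
        migrate_vma_finalize(&args);

        return 0;
}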
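
And for the hmm_range_fault() aside above, mirroring CPU PTEs into device page
tables follows roughly the pattern below (simplified - my_gpu_mirror_range() is
made up, and the -EBUSY retry loop and the driver's own locking are left out):

#include <linux/hmm.h>
#include <linux/mmu_notifier.h>
#include <linux/mm.h>

static int my_gpu_mirror_range(struct mmu_interval_notifier *notifier,
                               unsigned long start, unsigned long end,
                               unsigned long *pfns)
{
        struct hmm_range range = {
                .notifier       = notifier,
                .start          = start,
                .end            = end,
                .hmm_pfns       = pfns,
                .default_flags  = HMM_PFN_REQ_FAULT,
        };
        int ret;

        range.notifier_seq = mmu_interval_read_begin(notifier);

        mmap_read_lock(notifier->mm);
        ret = hmm_range_fault(&range);
        mmap_read_unlock(notifier->mm);
        if (ret)
                return ret;     /* -EBUSY means go around the retry loop */

        /*
         * Program the device page tables from range.hmm_pfns, checking
         * mmu_interval_read_retry() under the driver's page table lock.
         */
        return 0;
}
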
> Driver doesn't have to do much in the way of allocation management.
>
> This is probably less compelling since you don't want general purpose
> services like reclaim, migration, compaction, tiering, etc.

On at least some of our systems I'm told we do want this, hence my interest
here. Currently we have systems not using DEVICE_COHERENT and instead just
onlining everything as normal system-managed memory in order to get reclaim
and tiering. Of course then people complain that it's managed as normal system
memory and non-GPU related things (eg. page-cache) end up in what's viewed as
special purpose memory.

> The value is clearly that you get to manage GPU memory like any other
> memory, but without worry that other parts of the system will touch it.
>
> I'm much more focused on the "I have memory that is otherwise general
> purpose, and wants services like reclaim and compaction, but I want
> strong controls over how things can land there in the first place".

So maybe there is some overlap here - what I have is memory that we want managed
much like normal memory but with strong controls over what it can be used for
(ie. just for tasks utilising the processing element on the accelerator).

- Alistair
> ~Gregory
>