Message-ID: <aWF1uDdP75gOCGLm@gourry-fedora-PF4VCD3F>
Date: Fri, 9 Jan 2026 16:40:08 -0500
From: Gregory Price <gourry@...rry.net>
To: Yosry Ahmed <yosry.ahmed@...ux.dev>
Cc: linux-mm@...ck.org, cgroups@...r.kernel.org, linux-cxl@...r.kernel.org,
linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, kernel-team@...a.com,
longman@...hat.com, tj@...nel.org, hannes@...xchg.org,
mkoutny@...e.com, corbet@....net, gregkh@...uxfoundation.org,
rafael@...nel.org, dakr@...nel.org, dave@...olabs.net,
jonathan.cameron@...wei.com, dave.jiang@...el.com,
alison.schofield@...el.com, vishal.l.verma@...el.com,
ira.weiny@...el.com, dan.j.williams@...el.com,
akpm@...ux-foundation.org, vbabka@...e.cz, surenb@...gle.com,
mhocko@...e.com, jackmanb@...gle.com, ziy@...dia.com,
david@...nel.org, lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com, rppt@...nel.org, axelrasmussen@...gle.com,
yuanchu@...gle.com, weixugc@...gle.com, yury.norov@...il.com,
linux@...musvillemoes.dk, rientjes@...gle.com,
shakeel.butt@...ux.dev, chrisl@...nel.org, kasong@...cent.com,
shikemeng@...weicloud.com, nphamcs@...il.com, bhe@...hat.com,
baohua@...nel.org, chengming.zhou@...ux.dev,
roman.gushchin@...ux.dev, muchun.song@...ux.dev, osalvador@...e.de,
matthew.brost@...el.com, joshua.hahnjy@...il.com, rakie.kim@...com,
byungchul@...com, ying.huang@...ux.alibaba.com, apopple@...dia.com,
cl@...two.org, harry.yoo@...cle.com, zhengqi.arch@...edance.com
Subject: Re: [RFC PATCH v3 7/8] mm/zswap: compressed ram direct integration
On Fri, Jan 09, 2026 at 04:00:00PM +0000, Yosry Ahmed wrote:
> On Thu, Jan 08, 2026 at 03:37:54PM -0500, Gregory Price wrote:
>
> If the memory is byte-addressable, using it as a second tier makes it
> directly accessible without page faults, so the access latency is much
> better than a swapped out page in zswap.
>
> Are there some HW limitations that allow a node to be used as a backend
> for zswap but not a second tier?
>
Coming back around - presumably any compressed node capable of hosting a
proper tier would be compatible with zswap, but you might have hardware
that is sufficiently slow (slower than DRAM, faster than storage) that
using it as a proper tier is less efficient than incurring faults.
The standard I've been using is 500ns+ cacheline fetches, but this is
somewhat arbitrary. Even 500ns might be better than accessing multi-us
storage, but once you add compression you might hit 600ns-1us.

This is beside the point - apologies for the wall of text below, feel
free to skip the next section. I'm writing out what hardware-specific
details I can share for the sake of completeness.
Some hardware details
=====================
Every proposed piece of compressed memory hardware I have seen would
operate essentially by lying about its capacity to the operating
system - and then providing mechanisms to determine when the
compression ratio is dropping to dangerous levels.
Hardware Says : 8GB
Hardware Has  : 1GB
Node Capacity : 8GB
The capacity numbers are static. Even with hotplug, they must be
considered static - because the runtime compression ratio can change.
If the device fails to achieve a 4:1 compression ratio and real usage
starts to exceed real capacity, the system will fail
(dropped writes, poisons, machine checks, etc.).
We can mitigate this with strong write-controls and querying the device
for compression ratio data prior to actually migrating a page.
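For example, the pre-migration safety check might look something like
this (names entirely hypothetical, just to illustrate the handshake):

	/*
	 * Before migrating a page in, ask the device whether its *real*
	 * capacity can still absorb it at the current compression ratio.
	 */
	static bool cram_can_store(struct cram_device *dev)
	{
		u64 used = cram_real_bytes_used(dev);	/* hypothetical */
		u64 cap = cram_real_capacity(dev);	/* hypothetical */

		/* keep headroom so a ratio collapse can't wedge the node */
		return used + CRAM_HEADROOM_BYTES < cap;
	}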
Why Zswap to start
==================
ZSwap is an existing, clean read and write control path:
- We fault on all accesses.
- It otherwise uses system memory under the hood (kmalloc).
I decided to use zswap as a proving ground for the concept. While the
design in this patch is simplistic (and as you suggest below, can
clearly be improved), it demonstrates the entire concept:
on demotion:
- allocate a page from private memory
- ask the driver if it's safe to use
- if safe   -> migrate
  if unsafe -> fallback
on memory access:
- "promote" to a real page
- inform the driver the page has been released (zero or discard)
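Heavily simplified, the demotion side amounts to something like this
(illustrative names, not the literal patch code):

	static bool zswap_store_direct(int nid, struct page *src,
				       struct zswap_entry *entry)
	{
		struct page *dst;

		dst = alloc_pages_node(nid, GFP_NOWAIT | __GFP_THISNODE, 0);
		if (!dst)
			return false;		/* fall back to compression */

		/* ask the driver if this page is safe to use */
		if (node_private_allocated(dst)) {
			__free_page(dst);
			return false;		/* fall back to compression */
		}

		copy_mc_highpage(dst, src);	/* migrate the contents */
		entry->handle = (unsigned long)dst;
		entry->direct = true;
		return true;
	}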
As you point out, the real value in byte-accessible memory is leaving
the memory mapped. The only difference between cram.c and zswap.c in
the above pattern would be:
on demotion:
- allocate a page from private memory
- ask the driver if it's safe to use
- if safe   -> migrate and remap the page as RO in page tables
  if unsafe -> trigger reclaim on cram node
            -> fallback to another demotion
on *write* access:
- promote to real page
- clean up the compressed page
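And the write-fault side would be roughly (again hypothetical,
handwaving the rmap/page-table plumbing):

	static int cram_promote_on_write(struct page *cpage)
	{
		struct page *new = alloc_page(GFP_HIGHUSER_MOVABLE);

		if (!new)
			return -ENOMEM;

		copy_mc_highpage(new, cpage);	/* RO reads stayed valid */
		/* ... remap all mappings to 'new' as RW (elided) ... */
		node_private_freed(cpage);	/* zero or discard on device */
		__free_page(cpage);
		return 0;
	}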
> Or is the idea to make promotions from compressed memory to normal
> memory fault-driven instead of relying on page hotness?
>
> I also think there are some design decisions that need to be made before
> we commit to this, see the comments below for more.
>
100% agreed - I'm absolutely not locked into a design, this just gets
the ball rolling :].
> > /* RCU-protected iteration */
> > static LIST_HEAD(zswap_pools);
> > /* protects zswap_pools list modification */
> > @@ -716,7 +732,13 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
> > static void zswap_entry_free(struct zswap_entry *entry)
> > {
> > 	zswap_lru_del(&zswap_list_lru, entry);
> > -	zs_free(entry->pool->zs_pool, entry->handle);
> > +	if (entry->direct) {
> > +		struct page *page = (struct page *)entry->handle;
>
> Would it be cleaner to add a union in zswap_entry that has entry->handle
> and entry->page?
>
Absolutely. Ack.
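Something like (sketch):

	struct zswap_entry {
		...
		union {
			unsigned long handle;	/* zsmalloc handle */
			struct page *page;	/* direct N_PRIVATE page */
		};
		...
	};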
> > +	/* Skip nodes we've already tried and failed */
> > +	if (node_isset(nid, tried_nodes))
> > +		continue;
>
> Why do we need this? Does for_each_node_mask() iterate each node more
> than once?
>
This is just me being stupid, I will clean this up. I think I wrote
this when I was using a _next nodemask variant that can loop around,
and just left it in when I got things working.
> I think we can drop the 'found' label by moving things around, would
> this be simpler?
> 	for_each_node_mask(..) {
> 		...
> 		ret = node_private_allocated(dst);
> 		if (!ret)
> 			break;
>
> 		__free_page(dst);
> 		dst = NULL;
> 	}
>
ack, thank you.
> So the CXL code tells zswap what nodes are usable, then zswap tries
> getting a page from these nodes and checking them using APIs provided by
> the CXL code.
>
> Wouldn't it be a better abstraction if the nodemask lived in the CXL
> code and an API was exposed to zswap just to allocate a page to copy to?
> Or we can abstract the copy as well and provide an API that directly
> tries to copy the page to the compressible node.
>
> IOW move zswap_compress_direct() (probably under a different name?) and
> zswap_direct_nodes into CXL code since it's not really zswap logic.
>
> Also, I am not sure if the zswap_compress_direct() call and check would
> introduce any latency, since almost all existing callers will pay for it
> without benefiting.
>
> If we move the function into CXL code, we could probably have an inline
> wrapper in a header with a static key guarding it to make sure there is no
> overhead for existing users.
>
CXL is also the wrong place to put it - CXL is just one potential
source of such a node. We'd want that abstracted...

So this looks like a good use of memory-tiers.c - do the dispatch there
and have it set static branches for various features on node
registration.
	struct page *mt_migrate_page_to(NODE_TYPE, src, &size);
	-> on success, returns the dst page and the size of the page on
	   hardware (the returned size would address your accounting
	   notes below)

Then have the migrate function in memory-tiers do all the node_private
callbacks.
So that would limit the zswap-internal change to:

	if (zswap_node_check()) {	/* static branch check */
		cpage = mt_migrate_page_to(NODE_PRIVATE_ZSWAP, src, &size);
		if (cpage) {
			entry->page_handle = cpage;
			entry->length = size;
			entry->direct = true;
			return true;
		}
	}
	/* Fallthrough to normal compression */
ack. This is all great, thank you.
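For the static-key guard you suggest, the zswap-side check could be as
small as this header-side wrapper (sketch - memory-tiers would flip the
key when the first NODE_PRIVATE_ZSWAP node registers):

	DECLARE_STATIC_KEY_FALSE(mt_zswap_direct_key);

	static __always_inline bool zswap_node_check(void)
	{
		return static_branch_unlikely(&mt_zswap_direct_key);
	}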
... snip ...
> > entry->length = size
>
> I don't think this works. Setting entry->length = PAGE_SIZE will cause a
> few problems, off the top of my head:
>
> 1. An entire page of memory will be charged to the memcg, so swapping
> out the page won't reduce the memcg usage, which will cause thrashing
> (reclaim with no progress when hitting the limit).
>
> Ideally we'd get the compressed length from HW and record it here to
> charge it appropriately, but I am not sure how we actually want to
> charge memory on a compressed node. Do we charge the compressed size as
> normal memory? Does it need separate charging and a separate limit?
>
> There are design discussions to be had before we commit to something.
I have a feeling tracking individual page usage would be way too
granular / inefficient, but I will consult with some folks on whether
this can be queried. If so, we can add a way to get that info:

	node_private_page_size(page) -> returns the device-reported page size

or work it directly into the migrate() call like above.
--- assuming there isn't a way and we have to deal with fuzzy math ---
The goal should definitely be to leave the charging statistics the same
from the perspective of services - i.e. zswap should charge a whole
page, because according to the OS it just used a whole page.

What this would mean is that memcg would have to work with fuzzy data.
If 1GB is charged and the compression ratio is 4:1, reclaim should
operate (by way of callback) as if it has used 256MB.

I think this is the best you can do without tracking individual pages.
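i.e. something like the below, where node_private_ratio_x100() is a
hypothetical query for the device-reported ratio (400 == 4:1):

	/* 1GB charged at 4:1 -> ~256MB of "real" usage */
	static u64 cram_effective_usage(u64 charged_bytes, int nid)
	{
		u64 ratio_x100 = node_private_ratio_x100(nid);

		return div64_u64(charged_bytes * 100, ratio_x100);
	}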
>
> 2. The page will be incorrectly counted in
> zswap_stored_incompressible_pages.
>
If we can track individual page size, then we can fix that.
If we can't, then we'd need a zswap_stored_direct_pages counter and to
do the accounting a bit differently. We probably want direct_pages
accounting anyway, so I might just add that.
> Aside from that, zswap_total_pages() will be wrong now, as it gets the
> pool size from zsmalloc and these pages are not allocated from zsmalloc.
> This is used when checking the pool limits and is exposed in stats.
>
This is ignorance of zswap internals on my part - yeah, good point.
Will look into this accounting a little more.
> > + memcpy_folio(folio, 0, zfolio, 0, PAGE_SIZE);
>
> Why are we using memcpy_folio() here but copy_mc_highpage() on the
> compression path? Are they equivalent?
>
Both are in include/linux/highmem.h.
I was avoiding page->folio conversions in the compression path because
I already had a struct page.
tl;dr: I'm still looking for the "right" way to do this. I originally
had a "HACK:" tag here but it seems I dropped it prematurely.
(I also think this code can be pushed into mt_ or callbacks)
> > +	if (entry->direct) {
> > +		struct page *freepage = (struct page *)entry->handle;
> > +
> > +		node_private_freed(freepage);
> > +		__free_page(freepage);
> > +	} else
> > +		zs_free(pool->zs_pool, entry->handle);
>
> This code is repeated in zswap_entry_free(), we should probably wrap it
> in a helper that frees the private page or the zsmalloc entry based on
> entry->direct.
>
ack.
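Something like (sketch, reusing the names from the diff):

	static void zswap_entry_free_data(struct zswap_entry *entry)
	{
		if (entry->direct) {
			struct page *page = (struct page *)entry->handle;

			node_private_freed(page);
			__free_page(page);
		} else {
			zs_free(entry->pool->zs_pool, entry->handle);
		}
	}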
Thank you again for taking a look, this has been enlightening. Good
takeaways for the rest of the N_PRIVATE design.
I think we can minimize zswap changes even further given this.
~Gregory