Message-ID: <CACePvbWV3STyCg0vYDXYg7asnxLTa4Jb5Fa59g7_QeTVxKV=ig@mail.gmail.com>
Date: Sun, 30 Nov 2025 00:38:38 +0400
From: Chris Li <chrisl@...nel.org>
To: Nhat Pham <nphamcs@...il.com>
Cc: Rik van Riel <riel@...riel.com>, Johannes Weiner <hannes@...xchg.org>, 
	Andrew Morton <akpm@...ux-foundation.org>, Kairui Song <kasong@...cent.com>, 
	Kemeng Shi <shikemeng@...weicloud.com>, Baoquan He <bhe@...hat.com>, 
	Barry Song <baohua@...nel.org>, Yosry Ahmed <yosry.ahmed@...ux.dev>, 
	Chengming Zhou <chengming.zhou@...ux.dev>, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	pratmal@...gle.com, sweettea@...gle.com, gthelen@...gle.com, 
	weixugc@...gle.com
Subject: Re: [PATCH RFC] mm: ghost swapfile support for zswap

On Sat, Nov 29, 2025 at 12:46 AM Nhat Pham <nphamcs@...il.com> wrote:
>
> On Thu, Nov 27, 2025 at 11:10 AM Chris Li <chrisl@...nel.org> wrote:
> >
> > On Thu, Nov 27, 2025 at 6:28 AM Rik van Riel <riel@...riel.com> wrote:
> > >
> > > Sorry, I am talking about upstream.
> >
> > So far I have not had a pleasant upstream experience when submitting
> > this particular patch to upstream.
> >
> > > I really appreciate anybody participating in Linux
> > > kernel development. Linux is good because different
> > > people bring different perspectives to the table.
> >
> > Of course everybody is welcome. However, NACK without technical
> > justification is very bad for upstream development. I can't imagine
> > what a new hacker would think after going through what I have gone
> > through for this patch. He/she will likely quit contributing upstream.
> > This is not the kind of welcome we want.
> >
> > Nhat needs to be able to technically justify his NACK as a maintainer.
> > Sorry there is no other way to sugar coat it.
>
> I am NOT the only zswap maintainer who has expressed concerns. Other
> people also have their misgivings, so I have let them speak and not
> put words in their mouths.

You did not mention the fact that both NACKs from zswap maintainers
come from the same company. I assume you have some kind of team sync.
There is a term for that: "persons acting in concert".

What I mean by "technically unjustifiable" is that the VS patch series
is a non-starter for merging into mainline.
In this email you suggest the per-swap-slot memory overhead is 48
bytes, down from 64 bytes previously:

https://lore.kernel.org/linux-mm/CAKEwX=Mea5V6CKcGuQrYfCQAKErgbje1s0fThjkgCwZXgF-d2A@mail.gmail.com/

Do you have a newer VS that significantly reduces that? If so, what is
the new number?

The starting point before your VS is 11 bytes (3 bytes static, 8 bytes
dynamic); 48 bytes is more than 4x the original size.
This will have a huge impact on deployments that use a lot of swap.
The worst part is that once your VS series is in the kernel, that
overhead is always on: it forces the overhead even when the
redirection is not used. This will hurt Google's fleet very badly if
deployed: for the same jobs, kernel memory consumption will jump up
and jobs will fail. Everybody's kernel that uses swap will suffer,
because it is always on. The alternative, the swap table, uses much
less overhead, so your VS leaves money on the table.
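
To make the scale concrete, here is a rough back-of-the-envelope
sketch. The 64 GiB swapfile is only an assumed example, the 176 MiB
figure is the worst case with every slot's dynamic part in use, and
48 bytes vs 11 bytes are the numbers discussed above:

    #include <stdio.h>

    int main(void)
    {
            /* Assumed example: 64 GiB of swap backed by 4 KiB pages. */
            unsigned long long swap_bytes = 64ULL << 30;
            unsigned long long slots = swap_bytes / 4096;   /* 16M slots */

            /* Per-slot metadata: 48 bytes with the current VS vs 11 bytes today. */
            printf("slots:            %llu\n", slots);
            printf("VS metadata:      %llu MiB\n", slots * 48 >> 20);  /* 768 MiB */
            printf("current metadata: %llu MiB\n", slots * 11 >> 20);  /* 176 MiB */
            return 0;
    }

That is roughly 768 MiB versus 176 MiB of metadata on that one example
machine, before a single page of actual swapped data.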

So I consider your VS a non-starter. I repeatedly call you out
because you keep dodging this critical question. Johannes refers to
you for the detailed value of the overhead as well. Dodging critical
questions makes a technical debate very difficult to conduct and makes
driving it to a conflict resolution impossible. BTW, this is my big
concern with the 2023 swap abstraction talk on which your VS is based.
The community feedback at the time strongly favored my solution. I
don't understand why you are rebooting a solution the community did
not favor without addressing those concerns.

The other part of the bad experience is that you NACKed first and then
asked clarifying questions later. The proper order is the other way
around: you should fully understand the subject BEFORE you NACK it.
A NACK is very serious business.

I did try my best to answer the clarification questions from your
team. I appreciate that Johannes and Yosry asked for clarification to
advance the discussion. I did not see more questions from them, so I
assume they got what they wanted to know. If you still feel something
is missing, you should ask a follow-up question about the part where
you need more clarification. We can repeat that until you understand.
You keep using the phrase "hand waving" as if I am faking it. That is
FUD. Communication is a two-way street. I can't force you to
understand, but asking more questions can help you. This is a complex
problem. I am confident I can explain it to Kairui and he can
understand, because he has a lot more context, not because I am faking
it. Ask nicely so I can answer nicely. Please stay on the technical
side of the discussion.

So I consider using VS to NACK my patch technically unjustifiable.
Your current VS with 48 bytes of per-slot overhead is not usable at
all in a standard upstream kernel. Can we agree on that?

As we all know, providing the same functionality with less memory is a
lot harder than with more. If you can dramatically reduce the memory
usage, you will likely need to rebuild the whole patch series from
scratch. That might force you toward a solution similar to the swap
table; in that case, why not join team swap table? We can reopen the
topic by then if you have a newer VS that:
1) addresses the per-swap-slot memory overhead, ideally close to the
first-principles value;
2) makes the overhead optional: if redirection is not used, preferably
no overhead is paid (see the sketch after this list);
3) shows value incrementally across the patch series, not all or nothing.
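
For point 2, here is a purely hypothetical sketch of "pay only when
redirection is used". None of these names or layouts come from the VS
series or the swap table series; it only illustrates the shape of the
requirement:

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Hypothetical 8-byte per-slot word: by default it encodes the backend
     * location directly.  Only when the low bit is set does it point to a
     * separately allocated redirection record, so slots that never redirect
     * never pay for one.
     */
    #define SLOT_REDIRECT_BIT   0x1ULL

    struct redirect_rec {                   /* allocated on demand only */
            uint64_t backend_loc;           /* e.g. zswap entry or disk slot */
            uint8_t  backend_type;
    };

    static inline bool slot_is_redirected(uint64_t word)
    {
            return word & SLOT_REDIRECT_BIT;
    }

    static inline struct redirect_rec *slot_redirect_rec(uint64_t word)
    {
            return (struct redirect_rec *)(uintptr_t)(word & ~SLOT_REDIRECT_BIT);
    }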

Sorry, this email is getting very long and I have very limited time.
Let's discuss one topic at a time. I would like to conclude that the
current VS is not a viable option as of now. I can reply to the other
parts of your email once we get the VS out of the way.

Best Regards,

Chris




>
> 1. I don't like the operational overhead of a static swapfile (having
> to statically size the zswap swapfile for each <host x workload>
> combination). Misspecification of the swapfile size can lead to
> unacceptable swap metadata overhead on small machines, or
> underutilization of zswap on big machines. And it is *impossible* to
> know how much zswap will be needed ahead of time, even if we fix the
> host - it depends on workload access patterns, memory compressibility,
> and latency/memory pressure tolerance.
>
> 2. I don't like the maintainer overhead (supporting special
> infrastructure for a very specific use case, i.e. no-writeback),
> especially since I'm not convinced this can be turned into a general
> architecture. See below.
>
> 3. I want to move us towards a more dynamic architecture for zswap.
> This is a step in the WRONG direction.
>
> 4. I don't believe this buys us anything we can't already do with
> userspace hacking. Again, zswap-over-zram (or insert whatever RAM-only
> swap option here), with writeback disabled, is 2-3 lines of script.
>
> I believe I already justified myself well enough :) It is you who have
> not really convinced me that this is, at the very least, a
> temporary/first step towards a long-term generalized architecture for
> zswap. Every time we pointed out an issue, you seemed to justify it
> with more vague ideas that deepened the confusion.
>
> Let's recap the discussion so far:
>
> 1. We claimed that this architecture is hard to extend for efficient
> zswap writeback, or backend transfer in general, without incurring
> page table updates. You claim you plan to implement a redirection
> entry to solve this.
>
> 2. We then pointed out that inserting a redirect entry into the current
> physical swap infrastructure will leave holes in the upper swap tier's
> address space, which is arguably *worse* than the current status quo
> of zswap occupying disk swap space. Again, you pulled out some vague
> ideas about "frontend" and "backend" swap, which, frankly, is
> conceptually very similar to swap virtualization.
>
> 3. The dynamicization of swap space is treated with the same rigor
> (or, more accurately, lack thereof). Just more handwaving about the
> "frontend" vs "backend" (which, again, is very close to swap
> virtualization). This requirement is a deal breaker for me - see
> requirement 1 above again.
>
> 4. We also pointed out your lack of thought about swapoff optimization,
> which, again, seems to be missing from your design. Again, more vagueness
> about rmap, which is probably more overhead.
>
> Look man, I'm not being hostile to you. Believe me on this - I respect
> your opinion, and I'm working very hard on reducing memory overhead
> for virtual swap, to see if I can meet you where you want it to be.
> The inefficient memory usage in the RFC's original design was due to:
>
> a) Readability. Space optimization can make code hard to read when
> fields are squeezed into the same int/long variable, so I just used a
> separate field for each piece of metadata information.
>
> b) I was playing with synchronization optimizations, i.e. using atomics
> instead of locks, and using per-entry locks. But I can go back to
> using a per-cluster lock (I hadn't implemented the cluster allocator at
> the time of the RFC, but in my latest version I have), which will
> further reduce the memory overhead by removing a couple of
> fields/packing more fields.
>
> The only non-negotiable per-swap-entry overhead will be a field to
> indicate the backend location (physical swap slot, zswap entry, etc.)
> + 2 bits to indicate the swap type. With some field union-ing magic,
> or pointer tagging magic, we can perhaps squeeze it even harder.
>
> I'm also working on reducing the CPU overhead - re-partitioning swap
> architectures (swap cache, zswap tree), reducing unnecessary xarray
> lookups where possible.
>
> We can then benchmark, and attempt to optimize it together as a community.
