lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite for Android: free password hash cracker in your pocket
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CAKEwX=MWU0An-Y3rwncSZ24h2Yb=RdzP7FaHr8uiAYx7MTFiAA@mail.gmail.com>
Date: Mon, 1 Dec 2025 15:37:29 -0800
From: Nhat Pham <nphamcs@...il.com>
To: Chris Li <chrisl@...nel.org>
Cc: Rik van Riel <riel@...riel.com>, Johannes Weiner <hannes@...xchg.org>, 
	Andrew Morton <akpm@...ux-foundation.org>, Kairui Song <kasong@...cent.com>, 
	Kemeng Shi <shikemeng@...weicloud.com>, Baoquan He <bhe@...hat.com>, 
	Barry Song <baohua@...nel.org>, Yosry Ahmed <yosry.ahmed@...ux.dev>, 
	Chengming Zhou <chengming.zhou@...ux.dev>, linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	pratmal@...gle.com, sweettea@...gle.com, gthelen@...gle.com, 
	weixugc@...gle.com
Subject: Re: [PATCH RFC] mm: ghost swapfile support for zswap

On Sat, Nov 29, 2025 at 12:38 PM Chris Li <chrisl@...nel.org> wrote:
>
> On Sat, Nov 29, 2025 at 12:46 AM Nhat Pham <nphamcs@...il.com> wrote:
> >
> > On Thu, Nov 27, 2025 at 11:10 AM Chris Li <chrisl@...nel.org> wrote:
> > >
> > > On Thu, Nov 27, 2025 at 6:28 AM Rik van Riel <riel@...riel.com> wrote:
> > > >
> > > > Sorry, I am talking about upstream.
> > >
> > > So far I have not had a pleasant upstream experience when submitting
> > > this particular patch to upstream.
> > >
> > > > I really appreciate anybody participating in Linux
> > > > kernel development. Linux is good because different
> > > > people bring different perspectives to the table.
> > >
> > > Of course everybody is welcome. However, NACK without technical
> > > justification is very bad for upstream development. I can't imagine
> > > what a new hacker would think after going through what I have gone
> > > through for this patch. He/she will likely quit contributing upstream.
> > > This is not the kind of welcome we want.
> > >
> > > Nhat needs to be able to technically justify his NACK as a maintainer.
> > > Sorry there is no other way to sugar coat it.
> >
> > I am NOT the only zswap maintainer who expresses concerns. Other
> > people also have their misgivings, so I have let them speak and not
> > put words in their mouths.
>
> You did not mention the fact that both two NACK from zswap maintainers
> are from the same company. I assume you have some kind of team sync.
> There is a term for that, called "person acting in concert".

I mean, Yosry pointed out issues with your approach too. Yosry is from
your company, no?

The issues I pointed out have all been technical, thus far. I never
even brought up Meta - I'm sure other parties have the same issues.

>
> What I mean in "technically unjustifiable" is that VS patch series is
> a non-starter to merge into mainline.
> In this email you suggest the per swap slot memory overhead is 48
> bytes previously 64 bytes.
>
> https://lore.kernel.org/linux-mm/CAKEwX=Mea5V6CKcGuQrYfCQAKErgbje1s0fThjkgCwZXgF-d2A@mail.gmail.com/
>
> Do you have newer VS that significantly reduce that? If so, what is
> the new number?
>
> The starting point before your VS is 11 bytes (3 bytes static, 8 bytes
> dynamic). 48bytes is more than 4x the original size.
> This will have a huge impact on the deployment that uses a lot of
> swap. The worst part is that once your VS series is in the kernel.
> That overhead is always on, it is forcing the overhead even if the
> redirection is not used. This will hurt Google's fleet very badly if
> deployed. Because of the same jobs, the kernel memory consumption will
> jump up and fail jobs. Every body's kernel who use swap will suffer
> because it is always on. The alternative, the swap table, uses much
> less overhead. So your VS leave money on the table.
>
> So I consider your VS is a non-starter. I repeatedly call you out
> because you keep dodging this critical question. Johannes refers to
> you for the detail value of the overhead as well.  Dodging critical
> questions makes a technical debate very difficult to conduct and drive
> to a conflict resolution impossible. BTW, this is my big concern on
> the 2023 swap abstraction talk which our VS is based on. The community
> feedback at the time strongly favored my solution. I don't understand
> why you reboot the community un-favored solution without addressing
> those concerns.

I reboot the VS work because I have not seen any indications that your
design could solve the problems I believe are principle for any swap
architectures: dynamicization of swap space, efficient backend
transfer, to name 2.

>
> The other part of the bad experience is that you NACK first then ask
> clarifying questions later. The proper order is the other way around.
> You should fully understand the subject BEFORE you NACK on it. NACK is
> a very serious business.
>
> I did try my best to answer clarification question from your team. I
> appreciate that Johannes and Yosry ask clarification to advance the
> discussion. I did not see more question from them I assume they got
> what they want to know. If you still feel something is missing out,
> you should ask a follow up question for the part in which you need
> more clarification. We can repeat until you understand. You keep using
> the phrase "hand waving" as if I am faking it. That is FUD.
> Communication is a two way street. I can't force you to understand,
> asking more questions can help you. This is complex problem. I am
> confident I can explain to Kairui and he can understand, because he
> has a lot more context, not because I am faking it. Ask nicely so I
> can answer nicely. Stay in the technical side of the discussion
> please.
>
> So I consider using VS to NACK my patch is technically unjustifiable.

I'm not NACK-ing the ghost swapfile because of VS. I'm NACK-ing
swapfile because of the technical requirements I pointed out above.
Virtual swap happens to neatly solve all of them, by design, from
first principle. I never ruled out the possibility of another design
that would satisfy all of them - I just did not see enough from you to
believe otherwise.

I don't believe a static ghosttfile is it. In fact, you CAN
theoretically implement virtual swap with a ghost swapfile as well.
The staticity will just make it operationally untenable. The next step
would be to dynamicize the swap infrastructure, at which point we
revert back to the original VS design.

I see the same thing played out in your response as well, with the
redirection entry, then frontend/backend swap space. It's starting to
eerily resembles virtual swap. Or maybe you can clarify?

> Your current VS with 48 byte overhead is not usable at all as an
> standard upstream kernel. Can we agree to that?

Sure, which is why I sent it as an RFC and not as an actual patch
series pending merging :) Its main purpose was to demonstrate the
workflow of how a feature-complete virtual swap subsystem might
behave, in all of the code paths of the memory subsystem. I can then
optimize the fields piecemeal, while weighing the tradeoff (such as
lock granularity v.s lock fields memory overhead). You and Kairui are
welcome to criticize, comment, and help me optimize it, as did Yosry
and Johannes in the past.

>
> As we all know, using less memory to function the same is a lot harder
> than using more. If you can dramatically reduce the memory usage, you

I don't necessarily disagree.

I would, however, would like to point out that the reverse is true too
- you can't necessarily compare the overhead of two designs, where one
achieve a lot more in terms of features and/or operational goals than
the other.

> likely need to rebuild the whole patch series from scratch. If might
> force you to use solution similar to swap table, in that case why not
> join team swap table?

Because even with the current swap table design, the allocator is
*still* static.

I would LOVE to use the current physical swap allocation
infrastructure. It just doesn't work in its current state.

> We can reopen the topic again by then if you have a newer VS:

Sure.

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ