Message-ID: <CAF8kJuPgNyWF5ZWccnz1KRCtqsiXRy_U-LcQxJ3jnSH2eQq-xw@mail.gmail.com>
Date: Fri, 15 Aug 2025 15:34:27 -0700
From: Chris Li <chrisl@...nel.org>
To: Nhat Pham <nphamcs@...il.com>
Cc: SeongJae Park <sj@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>,
Chengming Zhou <chengming.zhou@...ux.dev>, David Hildenbrand <david@...hat.com>,
Johannes Weiner <hannes@...xchg.org>, Yosry Ahmed <yosry.ahmed@...ux.dev>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Takero Funaki <flintglass@...il.com>, Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH v2] mm/zswap: store <PAGE_SIZE compression failed page as-is
On Wed, Aug 13, 2025 at 11:33 AM Nhat Pham <nphamcs@...il.com> wrote:
> > I know Hugh has some idea to store incompressible pages in the swap
> > cache as well. Hugh?
>
> I've also proposed that approach internally - keeping the pages in the
> swapcache, while adding them to the zswap LRU for writeback to disk
> (and so that we do not consider them for zswap again in the future).
>
> But after a while, we decided against it, mostly due to the complexity
> of the solution. On the zswap side, we need to distinguish between the
> ordinary struct zswap_entry and the struct page on zswap's LRU list.
> Externally, we need to handle moving a page currently in the zswap LRU
> to the main memory anon LRUs too.

Google actually has an internal patch that keeps incompressible pages on
a separate LRU outside of zswap, but that breaks the zswap LRU order as
well. If there is interest and I can find the time, I can send it out so
we can compare notes. I do see the value of maintaining the LRU in the
zswap tier as a whole.
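To make that bookkeeping a bit more concrete: if the zswap LRU handed
back tagged pointers instead of list_heads embedded in zswap_entry, the
low bit of the stored pointer could tell the two object types apart.
A rough sketch only, with hypothetical names that are not from this
series, the internal patch, or current zswap code:

/*
 * Sketch: one zswap LRU holding both compressed entries and
 * uncompressed folios left in the swap cache.  Both objects are at
 * least word-aligned, so bit 0 of the stored pointer is free to use
 * as a type tag.
 */
#include <stdbool.h>
#include <stdint.h>

struct zswap_entry;             /* compressed object, as today */
struct folio;                   /* incompressible page kept in the swap cache */

#define ZSWAP_LRU_FOLIO_TAG     ((uintptr_t)0x1)

static inline void *zswap_lru_encode_folio(struct folio *folio)
{
        /* struct folio is at least word-aligned, so bit 0 is clear */
        return (void *)((uintptr_t)folio | ZSWAP_LRU_FOLIO_TAG);
}

static inline bool zswap_lru_cookie_is_folio(void *cookie)
{
        return (uintptr_t)cookie & ZSWAP_LRU_FOLIO_TAG;
}

static inline struct folio *zswap_lru_decode_folio(void *cookie)
{
        return (struct folio *)((uintptr_t)cookie & ~ZSWAP_LRU_FOLIO_TAG);
}

static inline struct zswap_entry *zswap_lru_decode_entry(void *cookie)
{
        return (struct zswap_entry *)cookie;    /* tag bit already clear */
}

Writeback would then check the tag when it pops an element: a tagged
cookie means "flush this folio to disk and drop it from the swap
cache", an untagged one follows the existing decompress-and-write path.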
>
> Migration is another concern. Zswap needs to be notified that the
> "backend" of a zswap entry has changed underneath it. Not impossible,
> but again that's just more surgery.

Ack. We might need to handle that operation inside zsmalloc.
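The notification itself would be small, wherever it ends up living
(zswap or zsmalloc). A purely hypothetical sketch of the "backend
changed underneath us" hook Nhat describes; none of these names exist
today:

struct folio;

/* hypothetical: the variant of zswap_entry that keeps the page as-is */
struct zswap_entry_uncompressed {
        struct folio *backing_folio;
        /* ... the usual zswap_entry fields elided ... */
};

/*
 * Hypothetical hook, called after migration has copied the data from
 * @old to @new and before @old is freed, with the zswap tree lock held.
 */
static void zswap_backing_migrated(struct zswap_entry_uncompressed *entry,
                                   struct folio *old, struct folio *new)
{
        if (entry->backing_folio == old)
                entry->backing_folio = new;
}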
>
> So we decided to start with a simple solution (this one), and iterate
> as issues crop up. At least then, we have production justifications
> for any future improvements.

Ack.

Chris