Message-ID: <uo6vtumoy4txklyqy4njitf5ex4eanudncicbbzknmuowopd7v@jm4ao4qapiza>
Date: Wed, 13 Aug 2025 12:42:32 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Chris Li <chrisl@...nel.org>
Cc: SeongJae Park <sj@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>, Chengming Zhou <chengming.zhou@...ux.dev>,
David Hildenbrand <david@...hat.com>, Johannes Weiner <hannes@...xchg.org>,
Nhat Pham <nphamcs@...il.com>, Yosry Ahmed <yosry.ahmed@...ux.dev>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, Takero Funaki <flintglass@...il.com>,
Hugh Dickins <hughd@...gle.com>
Subject: Re: [PATCH v2] mm/zswap: store <PAGE_SIZE compression failed page
as-is
On Wed, Aug 13, 2025 at 10:07:18AM -0700, Chris Li wrote:
>
> If you store uncompressed data in the zpool, zpool has metadata
> overhead, e.g. allocating the entry->handle for uncompressed pages.
> If the page is not compressed, another idea is to just skip the zpool
> and store it as a page in the zswap entry. We can make a union of
> entry->handle and entry->incompressible_page. If entry->length ==
> PAGE_SIZE, use entry->incompressible_page as a page.
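If I am reading the suggestion right, a rough sketch of that union would
look something like the below (field and function names here are
illustrative, not taken from the current code):

	struct zswap_entry {
		/* ... existing fields ... */
		unsigned int length;
		union {
			unsigned long handle;			/* zpool handle for compressed data */
			struct page *incompressible_page;	/* raw page kept when compression failed */
		};
	};

	/* Load path, discriminated by length: PAGE_SIZE means stored as-is. */
	if (entry->length == PAGE_SIZE)
		copy_highpage(page, entry->incompressible_page);
	else
		zswap_decompress(entry, page);	/* existing decompress path, name illustrative */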
The main problem being solved here is to avoid the scenario where
incompressible pages are rotated in the LRUs and zswapped multiple
times, wasting CPU on compressing incompressible pages. SJ's approach
solves the issue but with some memory overhead (the zswap entry). With
your suggestion, solving the same issue would require changing some
core parts of reclaim (__remove_mapping()), LRU handling (swap cache
pages not on the LRUs) and refault (putting such pages back on the LRU,
and deciding whether read and write faults should be handled
differently). So, the downside of that approach is more complex code.
Personally I would prefer a simple solution with some overhead over a
more complicated and error-prone solution without overhead. Or maybe
you have a simpler approach in mind?