Message-ID: <20250814162350.GH115258@cmpxchg.org>
Date: Thu, 14 Aug 2025 12:23:50 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: SeongJae Park <sj@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Chengming Zhou <chengming.zhou@...ux.dev>,
David Hildenbrand <david@...hat.com>, Nhat Pham <nphamcs@...il.com>,
Yosry Ahmed <yosry.ahmed@...ux.dev>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Takero Funaki <flintglass@...il.com>
Subject: Re: [PATCH v2] mm/zswap: store <PAGE_SIZE compression failed page
as-is

On Tue, Aug 12, 2025 at 10:00:46AM -0700, SeongJae Park wrote:
> When zswap writeback is enabled and compression of a given page fails,
> the page is swapped out to the backing swap device. This behavior
> breaks zswap's writeback LRU order, and hence users can experience
> unexpected latency spikes. If the page compresses without failure, but
> the result is PAGE_SIZE in size, the LRU order is kept, but the
> decompression overhead for loading the page back on a later access is
> unnecessary.
>
> Keep the LRU order and avoid the unnecessary decompression overhead in
> those cases by storing the original content as-is in the zpool. The
> length field of zswap_entry is set to PAGE_SIZE accordingly. Hence,
> whether an entry is stored as-is (and decompression is unnecessary) is
> identified by 'zswap_entry->length == PAGE_SIZE'.
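
So on the load side this boils down to roughly the below, if I read the
description right (just a sketch; the map/unmap helper names are
placeholders, not the actual patch code):

	if (entry->length == PAGE_SIZE) {
		/* Stored as-is: copy straight into the folio and skip
		 * the compressor entirely.
		 */
		void *src = zswap_obj_map(entry);	/* placeholder */

		memcpy_to_folio(folio, 0, src, PAGE_SIZE);
		zswap_obj_unmap(entry, src);		/* placeholder */
	} else {
		zswap_decompress(entry, folio);
	}
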
>
> Because the uncompressed data is saved in the zpool, the same as
> compressed data, this introduces no change in terms of memory
> management, including the movability and migratability of the involved
> pages.
>
> This change also does not increase the per-entry zswap metadata
> overhead. But as the number of incompressible pages increases, the
> total zswap metadata overhead grows proportionally. The overhead
> should not be problematic in usual cases, since the zswap metadata for
> a single entry is much smaller than PAGE_SIZE, and in common zswap use
> cases there should be a sufficient amount of compressible pages. It
> can also be mitigated by zswap writeback.
>
> When writeback is disabled, the additional overhead could be
> problematic. In that case, keep the current behavior: just return the
> failure and let swap_writeout() put the page back on the active LRU
> list.
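
IOW, the failure handling on the store side becomes something along
these lines, as I understand it (again a sketch; the as-is store helper
and the label are made up):

	if (!zswap_compress(page, entry, pool)) {
		if (!mem_cgroup_zswap_writeback_enabled(memcg))
			/* Current behavior: fail the store and let
			 * swap_writeout() put the page back on the
			 * active LRU.
			 */
			goto store_failed;
		/* Writeback enabled: store the page as-is so the LRU
		 * order is preserved.
		 */
		store_incompressible_page(page, entry);	/* made up */
		entry->length = PAGE_SIZE;
	}
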
>
> Knowing how many compression failures happened will be useful for
> future investigations. Add a new debugfs file, compress_fail, for that
> purpose.
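
The counter itself can presumably sit next to the existing debugfs
counters, e.g. (variable name made up):

	/* file scope, next to the other counters */
	static u64 zswap_compress_fail;

	/* in zswap_debugfs_init() */
	debugfs_create_u64("compress_fail", 0444, zswap_debugfs_root,
			   &zswap_compress_fail);
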
>
> Tests
> -----
>
> I tested this patch using a simple self-written microbenchmark that is
> available on GitHub[1]. You can reproduce the test I did by executing
> run_tests.sh from the repo on your system. Note that the repo's
> documentation is not great as of this writing, so you may need to read
> the code to use it.
>
> The basic test scenario is simple. Run a test program that makes
> artificial accesses to memory with artificial content under a
> memory.high limit, and measure how many accesses were made in a given
> time.
>
> The test program repeatedly and randomly accesses three anonymous
> memory regions. The regions are all 500 MiB in size and are accessed
> with the same probability. Two of them are filled with simple content
> that compresses easily, while the remaining one is filled with content
> read from /dev/urandom, which commonly fails to compress to
> <PAGE_SIZE. The program runs for two minutes and prints out the number
> of accesses made every five seconds.
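
For anyone who doesn't want to dig through the repo, the core of the
scenario is basically the below (rough sketch, not the actual benchmark
code; the memory.high limit is set up by the surrounding script):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define REGION_SZ	(500UL << 20)	/* 500 MiB per region */
#define NR_REGIONS	3
#define RUNTIME_SECS	120
#define REPORT_SECS	5

int main(void)
{
	char *regions[NR_REGIONS];
	unsigned long accesses = 0;
	time_t start, last_report;
	int urandom, i;

	for (i = 0; i < NR_REGIONS; i++) {
		regions[i] = mmap(NULL, REGION_SZ, PROT_READ | PROT_WRITE,
				  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (regions[i] == MAP_FAILED)
			return 1;
	}

	/* Two easily compressible regions, one incompressible one. */
	memset(regions[0], 'a', REGION_SZ);
	memset(regions[1], 'a', REGION_SZ);
	urandom = open("/dev/urandom", O_RDONLY);
	for (size_t off = 0; off < REGION_SZ; ) {
		ssize_t n = read(urandom, regions[2] + off, REGION_SZ - off);

		if (n <= 0)
			return 1;
		off += n;
	}
	close(urandom);

	/* Touch random pages of random regions, report every 5s. */
	start = last_report = time(NULL);
	while (time(NULL) - start < RUNTIME_SECS) {
		char *r = regions[rand() % NR_REGIONS];

		r[(unsigned long)rand() * 4096 % REGION_SZ] = 'b';
		accesses++;
		if (time(NULL) - last_report >= REPORT_SECS) {
			printf("%lu accesses\n", accesses);
			accesses = 0;
			last_report = time(NULL);
		}
	}
	return 0;
}
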
>
> The test script runs the program under the seven configurations below.

four?

> - 0: memory.high is set to 2 GiB, zswap is disabled.
> - 1-1: memory.high is set to 1350 MiB, zswap is disabled.
> - 1-2: On 1-1, zswap is enabled without this patch.
> - 1-3: On 1-2, this patch is applied.

I'm not sure 0 and 1-1 are super interesting, since we care about
zswap performance under pressure before and after the patch.

> For all zswap enabled cases, zswap shrinker is enabled.

It would, however, be good to test without the shrinker as well, since
it's currently optional and not enabled by default.

The performance with the shrinker looks great, but if it regresses for
setups without the shrinker we need additional gating.