Message-ID: <CAJD7tkaWyGWOArSEEUOLS4EQwdVycNrbvfHK-2rkixBQ8-h+3w@mail.gmail.com>
Date: Mon, 5 Aug 2024 17:14:55 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Nhat Pham <nphamcs@...il.com>
Cc: akpm@...ux-foundation.org, hannes@...xchg.org, shakeel.butt@...ux.dev,
linux-mm@...ck.org, kernel-team@...a.com, linux-kernel@...r.kernel.org,
flintglass@...il.com, chengming.zhou@...ux.dev
Subject: Re: [PATCH v3 2/2] zswap: track swapins from disk more accurately
On Mon, Aug 5, 2024 at 4:22 PM Nhat Pham <nphamcs@...il.com> wrote:
>
> Currently, there are a couple of issues with our disk swapin tracking
> for dynamic zswap shrinker heuristics:
>
> 1. We only increment the swapin counter on pivot pages. This means we
> are not taking into account pages that also need to be swapped in,
> but are already taken care of as part of the readahead window.
>
> 2. We are also incrementing when the pages are read from the zswap pool,
> which is inaccurate.
>
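To make issue 1 concrete: before this patch, the counter was bumped only
where readahead reads the faulting (pivot) page, roughly like this
(paraphrased from mm/swap_state.c, not verbatim):

	folio = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
					&page_allocated, false);
	if (unlikely(page_allocated)) {
		/*
		 * Counted here: only the pivot page, and counted even if
		 * zswap_load() ends up serving the read (issue 2).
		 */
		zswap_folio_swapin(folio);
		swap_read_folio(folio, NULL);
	}

The folios brought in by the rest of the readahead window never pass
through this path, so they were never counted.
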
> This patch rectifies these issues by incrementing the counter whenever
> we need to perform a non-zswap read. Note that we are slightly
> overcounting, as a page might be read into memory by the readahead
> algorithm even though it will not be needed by users - however, this is
> an acceptable inaccuracy, as the readahead logic itself will adapt to
> these kinds of scenarios.
>
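After the patch, the counting moves into swap_read_folio() itself, once
a zswap hit has been ruled out - sketched below (the quoted diff further
down is trimmed before the call site; zswap_folio_swapin() is the
existing helper that bumps the protection counter):

	if (zswap_load(folio)) {
		folio_unlock(folio);
		goto finish;
	}

	/* Not in zswap: count every non-zswap read, readahead or not */
	zswap_folio_swapin(folio);
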
> To test this change, I built the kernel under a cgroup with its
> memory.max set to 2 GB:
>
> real: 236.66s
> user: 4286.06s
> sys: 652.86s
> swapins: 81552
>
> For comparison, with just the new second chance algorithm, the build
> time is as follows:
>
> real: 244.85s
> user: 4327.22s
> sys: 664.39s
> swapins: 94663
>
> With neither:
>
> real: 263.89s
> user: 4318.11s
> sys: 673.29s
> swapins: 227300.5
>
> (average over 5 runs)
>
> With this change, the kernel CPU time reduces by a further 1.7%, and
> the real time is reduced by another 3.3%, compared to just the second
> chance algorithm by itself. The swapins count also reduces by another
> 13.85%.
>
> Combining the two changes, we reduce the real time by 10.32%, kernel CPU
> time by 3%, and number of swapins by 64.12%.
>
> To gauge the new scheme's ability to offload cold data, I ran another
> benchmark, in which the kernel was built under a cgroup with memory.max
> set to 3 GB, but with 0.5 GB worth of cold data allocated before each
> build (in a shmem file).
>
> Under the old scheme:
>
> real: 197.18s
> user: 4365.08s
> sys: 289.02s
> zswpwb: 72115.2
>
> Under the new scheme:
>
> real: 195.8s
> user: 4362.25s
> sys: 290.14s
> zswpwb: 87277.8
>
> (average over 5 runs)
>
> Notice that we actually observe a 21% increase in the number of written
> back pages - so the new scheme is just as good, if not better, at
> offloading pages from the zswap pool when they are cold. Build time
> reduces by around 0.7% as a result.
>
> Fixes: b5ba474f3f51 ("zswap: shrink zswap pool based on memory pressure")
> Suggested-by: Johannes Weiner <hannes@...xchg.org>
> Signed-off-by: Nhat Pham <nphamcs@...il.com>
> ---
> mm/page_io.c | 11 ++++++++++-
> mm/swap_state.c | 8 ++------
> 2 files changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/mm/page_io.c b/mm/page_io.c
> index ff8c99ee3af7..0004c9fbf7e8 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -521,7 +521,15 @@ void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
>
> if (zswap_load(folio)) {
> folio_unlock(folio);
> - } else if (data_race(sis->flags & SWP_FS_OPS)) {
> + goto finish;
> + }
> +
> + /*
> + * We have to read the page from slower devices. Increase zswap protection.
> + */
Can we fit this on a single line?
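E.g. something like this would fit in 80 columns (just a wording
suggestion):

	/* Read from slower devices; increase zswap protection. */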
Anyway, LGTM:
Acked-by: Yosry Ahmed <yosryahmed@...gle.com>