Message-ID: <CAJD7tkYb-=Ho85e2AJbkfe-FmT6KXpJpUgPRaXQb5-+sY5j4Hg@mail.gmail.com>
Date: Thu, 21 Mar 2024 14:09:04 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Nhat Pham <nphamcs@...il.com>,
Chengming Zhou <chengming.zhou@...ux.dev>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm: zswap: remove nr_zswap_stored atomic
On Tue, Mar 19, 2024 at 7:08 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> zswap_nr_stored is used to maintain the number of stored pages in zswap
> that are not same-filled pages. It is used in zswap_shrinker_count() to
> scale the number of freeable compressed pages by the compression ratio.
> That is, it reduces the amount of writeback from zswap at higher
> compression ratios, where the ROI from the IO diminishes.
>
> However, the need for this counter is questionable for two reasons:
> - It is redundant. The value can be inferred from (zswap_stored_pages -
> zswap_same_filled_pages).
> - When memcgs are enabled, we use memcg_page_state(memcg,
> MEMCG_ZSWAPPED), which includes same-filled pages anyway (i.e.
> equivalent to zswap_stored_pages).
>
> Use zswap_stored_pages instead in zswap_shrinker_count() to keep things
> consistent whether memcgs are enabled or not, and add a comment about
> the number of freeable pages possibly being scaled down more than it
> should if we have lots of same-filled pages (i.e. inflated compression
> ratio).
>
> Remove zswap_nr_stored and one atomic operation from each of the store
> and free paths.
>
> Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
Any thoughts on this patch? Should I resend it separately?
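
For anyone skimming the thread, the scaling in question boils down to
mult_frac(nr_freeable, nr_backing, nr_stored), i.e. the freeable LRU
objects scaled by the observed compression ratio. Below is a rough
userspace sketch of that computation (not the kernel code; the figures
are made up purely for illustration):

/*
 * Standalone userspace sketch (not kernel code) of the scaling done in
 * zswap_shrinker_count(). All figures below are made up for illustration.
 */
#include <stdio.h>

/* stand-in for the kernel's mult_frac(): x * numer / denom, avoiding overflow */
static unsigned long mult_frac(unsigned long x, unsigned long numer,
			       unsigned long denom)
{
	unsigned long quot = x / denom;
	unsigned long rem = x % denom;

	return quot * numer + rem * numer / denom;
}

int main(void)
{
	unsigned long nr_backing = 1000;  /* pages of backing memory used by zswap */
	unsigned long nr_stored = 4000;   /* stored pages, e.g. zswap_stored_pages */
	unsigned long nr_freeable = 2000; /* objects on the zswap LRU */

	/* scale freeable objects by the compression ratio (1000/4000 here) */
	printf("scaled freeable: %lu\n",
	       mult_frac(nr_freeable, nr_backing, nr_stored));
	return 0;
}

With those numbers the shrinker reports 500 instead of 2000, so the
better zswap compresses, the less it offers for writeback. Switching
from zswap_nr_stored to zswap_stored_pages only changes how strongly we
scale down when same-filled pages are in the mix, as the added comment
in the patch notes.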
> ---
> mm/zswap.c | 11 ++++++-----
> 1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 323f1dea43d22..ffcfce05a4408 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -181,8 +181,6 @@ struct zswap_pool {
>
> /* Global LRU lists shared by all zswap pools. */
> static struct list_lru zswap_list_lru;
> -/* counter of pages stored in all zswap pools. */
> -static atomic_t zswap_nr_stored = ATOMIC_INIT(0);
>
> /* The lock protects zswap_next_shrink updates. */
> static DEFINE_SPINLOCK(zswap_shrink_lock);
> @@ -880,7 +878,6 @@ static void zswap_entry_free(struct zswap_entry *entry)
> else {
> zswap_lru_del(&zswap_list_lru, entry);
> zpool_free(zswap_find_zpool(entry), entry->handle);
> - atomic_dec(&zswap_nr_stored);
> zswap_pool_put(entry->pool);
> }
> if (entry->objcg) {
> @@ -1305,7 +1302,7 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> #else
> /* use pool stats instead of memcg stats */
> nr_backing = zswap_total_pages();
> - nr_stored = atomic_read(&zswap_nr_stored);
> + nr_stored = atomic_read(&zswap_stored_pages);
> #endif
>
> if (!nr_stored)
> @@ -1325,6 +1322,11 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> * This ensures that the better zswap compresses memory, the fewer
> * pages we will evict to swap (as it will otherwise incur IO for
> * relatively small memory saving).
> + *
> + * The memory saving factor calculated here takes same-filled pages into
> + * account, but those are not freeable since they occupy almost no
> + * space. Hence, we may scale nr_freeable down a little bit more than we
> + * should if we have a lot of same-filled pages.
> */
> return mult_frac(nr_freeable, nr_backing, nr_stored);
> }
> @@ -1570,7 +1572,6 @@ bool zswap_store(struct folio *folio)
> if (entry->length) {
> INIT_LIST_HEAD(&entry->lru);
> zswap_lru_add(&zswap_list_lru, entry);
> - atomic_inc(&zswap_nr_stored);
> }
> spin_unlock(&tree->lock);
>
> --
> 2.44.0.291.gc1ea87d7ee-goog
>