Message-ID: <20240118151123.GH939255@cmpxchg.org>
Date: Thu, 18 Jan 2024 10:11:23 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Chengming Zhou <zhouchengming@...edance.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Yosry Ahmed <yosryahmed@...gle.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Chris Li <chriscli@...gle.com>,
Nhat Pham <nphamcs@...il.com>
Subject: Re: [PATCH 2/2] mm/zswap: split zswap rb-tree
On Wed, Jan 17, 2024 at 09:23:19AM +0000, Chengming Zhou wrote:
> Each swapfile has one rb-tree to look up the mapping of swp_entry_t to
> zswap_entry, protected by a spinlock, which can cause heavy lock
> contention when multiple tasks zswap_store/load concurrently.
>
> Optimize the scalability problem by splitting the zswap rb-tree into
> multiple rb-trees, each covering SWAP_ADDRESS_SPACE_PAGES (64M) of
> swap space, just as we did when splitting the swap cache address_space.
>
> Although this method can't eliminate the spinlock contention completely,
> it mitigates much of it. Below are the results of a kernel build in
> tmpfs with the zswap shrinker enabled:
>
>        linux-next   zswap-lock-optimize
> real   1m9.181s     1m3.820s
> user   17m44.036s   17m40.100s
> sys    7m37.297s    4m54.622s
>
> So there are clear improvements, particularly in sys time.
>
> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
Acked-by: Johannes Weiner <hannes@...xchg.org>
One minor nit:
> @@ -265,6 +266,10 @@ static bool zswap_has_pool;
> * helpers and fwd declarations
> **********************************/
>
> +#define swap_zswap_tree(entry) \
> + (&zswap_trees[swp_type(entry)][swp_offset(entry) \
> + >> SWAP_ADDRESS_SPACE_SHIFT])
Make this a static inline function instead?
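Something like this, perhaps (a minimal sketch, assuming zswap_trees
stays a per-swap-type array of struct zswap_tree pointers as in the
patch):

	static inline struct zswap_tree *swap_zswap_tree(swp_entry_t swp)
	{
		return &zswap_trees[swp_type(swp)][swp_offset(swp)
			>> SWAP_ADDRESS_SPACE_SHIFT];
	}

That gets us type checking on the swp_entry_t argument. With 4K pages,
SWAP_ADDRESS_SPACE_SHIFT is 14, so each tree covers 16384 swap slots,
i.e. the 64M mentioned above.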