Message-ID: <CANeU7QkdVX0rYZS+QLv58L+zP5ZrHiGjrhxjMuA21o++QTW1nA@mail.gmail.com>
Date: Mon, 22 Jan 2024 14:31:46 -0800
From: Chris Li <chrisl@...nel.org>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Johannes Weiner <hannes@...xchg.org>,
Nhat Pham <nphamcs@...il.com>, Chengming Zhou <zhouchengming@...edance.com>,
Huang Ying <ying.huang@...el.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm: zswap: remove unnecessary tree cleanups in zswap_swapoff()
On Fri, Jan 19, 2024 at 6:40 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> During swapoff, try_to_unuse() makes sure that zswap_invalidate() is
> called for all swap entries before zswap_swapoff() is called. This means
> that all zswap entries should already be removed from the tree. Simplify
> zswap_swapoff() by removing the tree cleanup loop, and leaving an
> assertion in its place.
>
> Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
> ---
> Chengming, Chris, I think this should make the tree split and the xarray
> conversion patches simpler (especially the former). If others agree,
> both changes can be rebased on top of this.
I was wondering why that cleanup loop needed to be there if all the
zswap entries should already have been invalidated. In my testing I
never saw this path delete an entry, so I think it was just-in-case
code. Nice cleanup, and it will help simplify my zswap-to-xarray
patch. Thanks for doing this.
Acked-by: Chris Li <chrisl@...nel.org> (Google)
Chris
> ---
> mm/zswap.c | 9 ++-------
> 1 file changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index f8bc9e0892687..9675c3c27f9d1 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1790,17 +1790,12 @@ void zswap_swapon(int type)
> void zswap_swapoff(int type)
> {
> struct zswap_tree *tree = zswap_trees[type];
> - struct zswap_entry *entry, *n;
>
> if (!tree)
> return;
>
> - /* walk the tree and free everything */
> - spin_lock(&tree->lock);
> - rbtree_postorder_for_each_entry_safe(entry, n, &tree->rbroot, rbnode)
> - zswap_free_entry(entry);
> - tree->rbroot = RB_ROOT;
> - spin_unlock(&tree->lock);
> + /* try_to_unuse() invalidated all entries already */
> + WARN_ON_ONCE(!RB_EMPTY_ROOT(&tree->rbroot));
> kfree(tree);
> zswap_trees[type] = NULL;
> }
> --
> 2.43.0.429.g432eaa2c6b-goog
>