Message-ID: <20240120024007.2850671-2-yosryahmed@google.com>
Date: Sat, 20 Jan 2024 02:40:06 +0000
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Nhat Pham <nphamcs@...il.com>, Chris Li <chrisl@...nel.org>,
Chengming Zhou <zhouchengming@...edance.com>, Huang Ying <ying.huang@...el.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Yosry Ahmed <yosryahmed@...gle.com>
Subject: [PATCH 1/2] mm: swap: update inuse_pages after all cleanups are done

In swap_range_free(), we update inuse_pages and then do some cleanups
(arch invalidation, zswap invalidation, swap cache cleanups, etc.).
During swapoff, try_to_unuse() uses inuse_pages to make sure all swap
entries have been freed. Make sure we only update inuse_pages after we
are done with the cleanups.
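
To illustrate why the order matters to a poller like try_to_unuse(),
here is a standalone userspace sketch (an analogue, not kernel code:
the variable names mirror the kernel's, but the threads, atomics, and
counts are all made up for the example):

/*
 * Userspace analogue: the "freer" thread plays swap_range_free()
 * and the main thread plays try_to_unuse() polling inuse_pages.
 * Because the counter is only decremented after the cleanups,
 * a poller that sees it hit zero knows the cleanups are done.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_long inuse_pages = 4;	/* pretend 4 entries are in use */
static atomic_int cleanups_done;

static void *freer(void *arg)
{
	/* ... arch/zswap/swap cache cleanups would happen here ... */
	atomic_store(&cleanups_done, 1);
	/* only then publish the new count, as in the patch below */
	atomic_fetch_sub(&inuse_pages, 4);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, freer, NULL);
	while (atomic_load(&inuse_pages))	/* try_to_unuse()'s check */
		;
	/* with this ordering, cleanups_done is guaranteed to be 1 */
	printf("cleanups_done = %d\n", atomic_load(&cleanups_done));
	pthread_join(t, NULL);
	return 0;
}
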
In practice, this shouldn't matter, because swap_range_free() is called
with the swap info lock held, and the swapoff code spins on that same
lock after try_to_unuse() returns anyway.
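
Concretely, here is a similar userspace sketch of that serialization
(again an analogue, not the real code: a plain mutex stands in for the
swap info spinlock, and the bodies are invented for the example):

/*
 * Userspace analogue of why the old order was still safe today:
 * the freeing side holds the lock across the counter update *and*
 * the cleanups, and the swapoff side takes the same lock after its
 * polling loop, so it waits out any half-finished free.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t si_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_long inuse_pages = 1;
static int cleanups_done;		/* protected by si_lock */

static void *freeing_side(void *arg)
{
	pthread_mutex_lock(&si_lock);	/* callers hold the lock */
	atomic_store(&inuse_pages, 0);	/* counter drops first ... */
	cleanups_done = 1;		/* ... cleanups finish later */
	pthread_mutex_unlock(&si_lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, freeing_side, NULL);
	while (atomic_load(&inuse_pages))	/* try_to_unuse() */
		;
	pthread_mutex_lock(&si_lock);	/* swapoff "spins on that lock" */
	/* the lock guarantees the free, cleanups included, is complete */
	printf("cleanups_done = %d\n", cleanups_done);
	pthread_mutex_unlock(&si_lock);
	pthread_join(t, NULL);
	return 0;
}
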
The goal is to make it obvious and more future-proof that once
try_to_unuse() returns, all cleanups are done. This also facilitates a
following zswap cleanup patch, which uses this fact to simplify
zswap_swapoff().

Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
---
 mm/swapfile.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 556ff7347d5f0..2fedb148b9404 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -737,8 +737,6 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 		if (was_full && (si->flags & SWP_WRITEOK))
 			add_to_avail_list(si);
 	}
-	atomic_long_add(nr_entries, &nr_swap_pages);
-	WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
 	if (si->flags & SWP_BLKDEV)
 		swap_slot_free_notify =
 			si->bdev->bd_disk->fops->swap_slot_free_notify;
@@ -752,6 +750,8 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 		offset++;
 	}
 	clear_shadow_from_swap_cache(si->type, begin, end);
+	atomic_long_add(nr_entries, &nr_swap_pages);
+	WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
 }
 
 static void set_cluster_next(struct swap_info_struct *si, unsigned long next)
--
2.43.0.429.g432eaa2c6b-goog