Message-ID: <CAJD7tkZSY4SJ89VxCotkLoB6VWN1SexuOTeNTaUKxqqPwbDvFQ@mail.gmail.com>
Date: Tue, 23 Jan 2024 01:54:03 -0800
From: Yosry Ahmed <yosryahmed@...gle.com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Johannes Weiner <hannes@...xchg.org>,
Nhat Pham <nphamcs@...il.com>, Chris Li <chrisl@...nel.org>,
Chengming Zhou <zhouchengming@...edance.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: swap: update inuse_pages after all cleanups are done
> Alternatively, we may just hold the spinlock in try_to_unuse() when we
> check si->inuse_pages at the end. This will also ensure that any calls
> to swap_range_free() have completed. Let me know what you prefer.
To elaborate, I mean replacing this patch and the memory barriers with
the diff below.
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 2fedb148b9404..9b932ecbd80a8 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2046,6 +2046,7 @@ static int try_to_unuse(unsigned int type)
 	struct swap_info_struct *si = swap_info[type];
 	struct folio *folio;
 	swp_entry_t entry;
+	unsigned int inuse;
 	unsigned int i;
 
 	if (!READ_ONCE(si->inuse_pages))
@@ -2123,8 +2124,14 @@ static int try_to_unuse(unsigned int type)
 	 * and even shmem_writepage() could have been preempted after
 	 * folio_alloc_swap(), temporarily hiding that swap. It's easy
 	 * and robust (though cpu-intensive) just to keep retrying.
+	 *
+	 * Read si->inuse_pages with the lock held to make sure that cleanups in
+	 * swap_range_free() are completed when we read si->inuse_pages == 0.
 	 */
-	if (READ_ONCE(si->inuse_pages)) {
+	spin_lock(&si->lock);
+	inuse = si->inuse_pages;
+	spin_unlock(&si->lock);
+	if (inuse) {
 		if (!signal_pending(current))
 			goto retry;
 		return -EINTR;
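
In case it helps to see the argument spelled out, here is a minimal user-space
sketch of why reading inuse_pages under the lock is enough. It is only an
illustration, not kernel code: a pthread mutex stands in for si->lock, the
struct and the cleanups_done flag are made-up stand-ins for the example, and
the "cleanup" is a placeholder for the real work (e.g. zswap_invalidate())
that swap_range_free() does while holding si->lock.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct swap_info_sketch {
	pthread_mutex_t lock;		/* stand-in for si->lock */
	unsigned int inuse_pages;	/* stand-in for si->inuse_pages */
	bool cleanups_done;		/* stand-in for zswap/swapcache cleanup state */
};

/* Freeing side: cleanups and the inuse_pages update happen under the lock. */
static void swap_range_free_sketch(struct swap_info_sketch *si, unsigned int nr)
{
	pthread_mutex_lock(&si->lock);
	si->cleanups_done = true;	/* placeholder for the real cleanups */
	si->inuse_pages -= nr;		/* updated before the lock is dropped */
	pthread_mutex_unlock(&si->lock);
}

/* try_to_unuse() side: only trust inuse_pages == 0 when read under the lock. */
static bool all_swap_freed(struct swap_info_sketch *si)
{
	unsigned int inuse;

	pthread_mutex_lock(&si->lock);
	inuse = si->inuse_pages;
	pthread_mutex_unlock(&si->lock);

	/*
	 * If inuse is 0 here, whichever free made it 0 has already dropped
	 * the lock, so its cleanups are also complete and visible.
	 */
	return inuse == 0;
}

int main(void)
{
	struct swap_info_sketch si = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.inuse_pages = 1,
	};

	swap_range_free_sketch(&si, 1);
	printf("all freed: %d, cleanups visible: %d\n",
	       all_swap_freed(&si), si.cleanups_done);
	return 0;
}

The same reasoning is what makes the spin_lock()/spin_unlock() pair in the
diff above sufficient without any explicit memory barriers.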