Message-Id: <20251117-swap-table-p2-v2-5-37730e6ea6d5@tencent.com>
Date: Mon, 17 Nov 2025 02:11:46 +0800
From: Kairui Song <ryncsn@...il.com>
To: linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>, Baoquan He <bhe@...hat.com>,
Barry Song <baohua@...nel.org>, Chris Li <chrisl@...nel.org>,
Nhat Pham <nphamcs@...il.com>, Yosry Ahmed <yosry.ahmed@...ux.dev>,
David Hildenbrand <david@...nel.org>, Johannes Weiner <hannes@...xchg.org>,
Youngjun Park <youngjun.park@....com>, Hugh Dickins <hughd@...gle.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Ying Huang <ying.huang@...ux.alibaba.com>,
Kemeng Shi <shikemeng@...weicloud.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
linux-kernel@...r.kernel.org, Kairui Song <kasong@...cent.com>
Subject: [PATCH v2 05/19] mm, swap: simplify the code and reduce indentation
From: Kairui Song <kasong@...cent.com>
Now that the swap cache is always used, the multiple swap cache checks are
no longer needed. Remove them and reduce the code indentation.

No behavior change.
Signed-off-by: Kairui Song <kasong@...cent.com>
---
mm/memory.c | 89 +++++++++++++++++++++++++++++--------------------------------
1 file changed, 43 insertions(+), 46 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 806becd6bad5..5bd417ae7f26 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4761,55 +4761,52 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
goto out_release;
page = folio_file_page(folio, swp_offset(entry));
- if (swapcache) {
- /*
- * Make sure folio_free_swap() or swapoff did not release the
- * swapcache from under us. The page pin, and pte_same test
- * below, are not enough to exclude that. Even if it is still
- * swapcache, we need to check that the page's swap has not
- * changed.
- */
- if (unlikely(!folio_matches_swap_entry(folio, entry)))
- goto out_page;
-
- if (unlikely(PageHWPoison(page))) {
- /*
- * hwpoisoned dirty swapcache pages are kept for killing
- * owner processes (which may be unknown at hwpoison time)
- */
- ret = VM_FAULT_HWPOISON;
- goto out_page;
- }
-
- /*
- * KSM sometimes has to copy on read faults, for example, if
- * folio->index of non-ksm folios would be nonlinear inside the
- * anon VMA -- the ksm flag is lost on actual swapout.
- */
- folio = ksm_might_need_to_copy(folio, vma, vmf->address);
- if (unlikely(!folio)) {
- ret = VM_FAULT_OOM;
- folio = swapcache;
- goto out_page;
- } else if (unlikely(folio == ERR_PTR(-EHWPOISON))) {
- ret = VM_FAULT_HWPOISON;
- folio = swapcache;
- goto out_page;
- }
- if (folio != swapcache)
- page = folio_page(folio, 0);
+ /*
+ * Make sure folio_free_swap() or swapoff did not release the
+ * swapcache from under us. The page pin, and pte_same test
+ * below, are not enough to exclude that. Even if it is still
+ * swapcache, we need to check that the page's swap has not
+ * changed.
+ */
+ if (unlikely(!folio_matches_swap_entry(folio, entry)))
+ goto out_page;
+ if (unlikely(PageHWPoison(page))) {
/*
- * If we want to map a page that's in the swapcache writable, we
- * have to detect via the refcount if we're really the exclusive
- * owner. Try removing the extra reference from the local LRU
- * caches if required.
+ * hwpoisoned dirty swapcache pages are kept for killing
+ * owner processes (which may be unknown at hwpoison time)
*/
- if ((vmf->flags & FAULT_FLAG_WRITE) && folio == swapcache &&
- !folio_test_ksm(folio) && !folio_test_lru(folio))
- lru_add_drain();
+ ret = VM_FAULT_HWPOISON;
+ goto out_page;
}
+ /*
+ * KSM sometimes has to copy on read faults, for example, if
+ * folio->index of non-ksm folios would be nonlinear inside the
+ * anon VMA -- the ksm flag is lost on actual swapout.
+ */
+ folio = ksm_might_need_to_copy(folio, vma, vmf->address);
+ if (unlikely(!folio)) {
+ ret = VM_FAULT_OOM;
+ folio = swapcache;
+ goto out_page;
+ } else if (unlikely(folio == ERR_PTR(-EHWPOISON))) {
+ ret = VM_FAULT_HWPOISON;
+ folio = swapcache;
+ goto out_page;
+ } else if (folio != swapcache)
+ page = folio_page(folio, 0);
+
+ /*
+ * If we want to map a page that's in the swapcache writable, we
+ * have to detect via the refcount if we're really the exclusive
+ * owner. Try removing the extra reference from the local LRU
+ * caches if required.
+ */
+ if ((vmf->flags & FAULT_FLAG_WRITE) &&
+ !folio_test_ksm(folio) && !folio_test_lru(folio))
+ lru_add_drain();
+
folio_throttle_swaprate(folio, GFP_KERNEL);
/*
@@ -4999,7 +4996,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
pte, pte, nr_pages);
folio_unlock(folio);
- if (folio != swapcache && swapcache) {
+ if (unlikely(folio != swapcache)) {
/*
* Hold the lock to avoid the swap entry to be reused
* until we take the PT lock for the pte_same() check
@@ -5037,7 +5034,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
folio_unlock(folio);
out_release:
folio_put(folio);
- if (folio != swapcache && swapcache) {
+ if (folio != swapcache) {
folio_unlock(swapcache);
folio_put(swapcache);
}
--
2.51.2