Message-Id: <20221219185840.25441-5-ryncsn@gmail.com>
Date: Tue, 20 Dec 2022 02:58:40 +0800
From: Kairui Song <ryncsn@...il.com>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Miaohe Lin <linmiaohe@...wei.com>,
David Hildenbrand <david@...hat.com>,
"Huang, Ying" <ying.huang@...el.com>,
Hugh Dickins <hughd@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Kairui Song <kasong@...cent.com>
Subject: [PATCH v2 4/4] swap: avoid holding swap reference in swap_cache_get_folio
From: Kairui Song <kasong@...cent.com>

All callers of swap_cache_get_folio() either already hold a reference to
the swap device or lock it while calling this function. The only
exception is shmem_swapin_folio(); make that caller hold a reference to
the swap device as well, so the helper can be simplified and a few
cycles saved.

This also gives shmem_swapin_folio() finer control over error handling:
on a race with swapoff it can simply try again, and for an invalid swap
entry it can fail with a proper error code.
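
For illustration, below is a minimal sketch of the caller-side pattern
this change relies on (a hypothetical helper, not the actual shmem code;
the real shmem_swapin_folio() keeps the reference until the swap entry
is fully resolved, and the error values mirror the hunk further down):

	/* sketch only; assumes the usual mm/swap headers */
	static int swapin_lookup_sketch(struct address_space *mapping,
					pgoff_t index, swp_entry_t swap,
					struct folio **foliop)
	{
		struct swap_info_struct *si;

		/* pin the device; fails on a swapoff race or a bad entry */
		si = get_swap_device(swap);
		if (!si)
			return shmem_confirm_swap(mapping, index, swap) ?
				-EINVAL : -EEXIST;

		/* the lookup itself no longer pins the device */
		*foliop = swap_cache_get_folio(swap, NULL, 0);

		/* drop the reference once the entry is no longer needed */
		put_swap_device(si);
		return 0;
	}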
Signed-off-by: Kairui Song <kasong@...cent.com>
---
mm/shmem.c | 11 +++++++++++
mm/swap_state.c | 8 ++------
2 files changed, 13 insertions(+), 6 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index c301487be5fb..5bdf7298d494 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1735,6 +1735,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
struct address_space *mapping = inode->i_mapping;
struct shmem_inode_info *info = SHMEM_I(inode);
struct mm_struct *charge_mm = vma ? vma->vm_mm : NULL;
+ struct swap_info_struct *si;
struct folio *folio = NULL;
swp_entry_t swap;
int error;
@@ -1746,6 +1747,14 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
if (is_swapin_error_entry(swap))
return -EIO;
+ si = get_swap_device(swap);
+ if (!si) {
+ if (!shmem_confirm_swap(mapping, index, swap))
+ return -EEXIST;
+ else
+ return -EINVAL;
+ }
+
/* Look it up and read it in.. */
folio = swap_cache_get_folio(swap, NULL, 0);
if (!folio) {
@@ -1806,6 +1815,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
delete_from_swap_cache(folio);
folio_mark_dirty(folio);
swap_free(swap);
+ put_swap_device(si);
*foliop = folio;
return 0;
@@ -1819,6 +1829,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
folio_unlock(folio);
folio_put(folio);
}
+ put_swap_device(si);
return error;
}
diff --git a/mm/swap_state.c b/mm/swap_state.c
index d8d171195a3a..cb9aaa00951d 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -321,19 +321,15 @@ static inline bool swap_use_vma_readahead(void)
* unlocked and with its refcount incremented - we rely on the kernel
* lock getting page table operations atomic even if we drop the folio
* lock before returning.
+ *
+ * Caller must lock the swap device or hold a reference to keep it valid.
*/
struct folio *swap_cache_get_folio(swp_entry_t entry,
struct vm_area_struct *vma, unsigned long addr)
{
struct folio *folio;
- struct swap_info_struct *si;
- si = get_swap_device(entry);
- if (!si)
- return NULL;
folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
- put_swap_device(si);
-
if (folio) {
bool vma_ra = swap_use_vma_readahead();
bool readahead;
--
2.35.2