Date:   Mon, 20 Nov 2023 03:47:24 +0800
From:   Kairui Song <ryncsn@...il.com>
To:     linux-mm@...ck.org
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        "Huang, Ying" <ying.huang@...el.com>,
        David Hildenbrand <david@...hat.com>,
        Hugh Dickins <hughd@...gle.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Matthew Wilcox <willy@...radead.org>,
        Michal Hocko <mhocko@...e.com>, linux-kernel@...r.kernel.org,
        Kairui Song <kasong@...cent.com>
Subject: [PATCH 08/24] mm/swap: check readahead policy per entry

From: Kairui Song <kasong@...cent.com>

Currently, VMA readahead is globally disabled when any rotating disk is
used as a swap backend. So when multiple swap devices are enabled, if a
slower hard disk is set as a low-priority fallback and a high-performance
SSD is used as the high-priority swap device, VMA readahead is still
disabled globally, and the SSD swap device's performance drops
significantly.

Check the readahead policy per entry to avoid this problem.
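
For illustration only (a minimal sketch mirroring the change below, not
the patch itself): the policy is now derived from the swap device backing
the faulting entry rather than from the global nr_rotate_swap counter, so
an SSD keeps VMA readahead even when a rotating fallback device is
present:

	/* Resolve the swap device backing this entry first ... */
	struct swap_info_struct *si = swp_swap_info(entry);

	/* ... VMA readahead applies only if that device is an SSD */
	bool vma_ra = data_race(si->flags & SWP_SOLIDSTATE) &&
		      READ_ONCE(enable_vma_readahead);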

Signed-off-by: Kairui Song <kasong@...cent.com>
---
 mm/swap_state.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index ff6756f2e8e4..fb78f7f18ed7 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -321,9 +321,9 @@ static inline bool swap_use_no_readahead(struct swap_info_struct *si, swp_entry_
 	return data_race(si->flags & SWP_SYNCHRONOUS_IO) && __swap_count(entry) == 1;
 }
 
-static inline bool swap_use_vma_readahead(void)
+static inline bool swap_use_vma_readahead(struct swap_info_struct *si)
 {
-	return READ_ONCE(enable_vma_readahead) && !atomic_read(&nr_rotate_swap);
+	return data_race(si->flags & SWP_SOLIDSTATE) && READ_ONCE(enable_vma_readahead);
 }
 
 /*
@@ -341,7 +341,7 @@ struct folio *swap_cache_get_folio(swp_entry_t entry,
 
 	folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
 	if (!IS_ERR(folio)) {
-		bool vma_ra = swap_use_vma_readahead();
+		bool vma_ra = swap_use_vma_readahead(swp_swap_info(entry));
 		bool readahead;
 
 		/*
@@ -920,16 +920,18 @@ static struct page *swapin_no_readahead(swp_entry_t entry, gfp_t gfp_mask,
 struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 			      struct vm_fault *vmf, bool *swapcached)
 {
+	struct swap_info_struct *si;
 	struct mempolicy *mpol;
 	struct page *page;
 	pgoff_t ilx;
 	bool cached;
 
+	si = swp_swap_info(entry);
 	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
-	if (swap_use_no_readahead(swp_swap_info(entry), entry)) {
+	if (swap_use_no_readahead(si, entry)) {
 		page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
 		cached = false;
-	} else if (swap_use_vma_readahead()) {
+	} else if (swap_use_vma_readahead(si)) {
 		page = swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf);
 		cached = true;
 	} else {
-- 
2.42.0
