Message-Id: <20241018064805.336490-4-kanchana.p.sridhar@intel.com>
Date: Thu, 17 Oct 2024 23:48:01 -0700
From: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>
To: linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
hannes@...xchg.org,
yosryahmed@...gle.com,
nphamcs@...il.com,
chengming.zhou@...ux.dev,
usamaarif642@...il.com,
ryan.roberts@....com,
ying.huang@...el.com,
21cnbao@...il.com,
akpm@...ux-foundation.org,
hughd@...gle.com,
willy@...radead.org,
bfoster@...hat.com,
dchinner@...hat.com,
chrisl@...nel.org,
david@...hat.com
Cc: wajdi.k.feghali@...el.com,
vinodh.gopal@...el.com,
kanchana.p.sridhar@...el.com
Subject: [RFC PATCH v1 3/7] pagevec: struct folio_batch changes for decompress batching interface.

Make the following changes to struct folio_batch for use in the
swapin_readahead() based zswap load batching interface for parallel
decompressions with IAA:

1) Move the SWAP_RA_ORDER_CEILING definition to pagevec.h.

2) Increase PAGEVEC_SIZE to (1UL << SWAP_RA_ORDER_CEILING), because
   vm.page-cluster=5 requires capacity for 32 folios; the sketch after
   this list models the capacity math.

3) Make folio_batch_add() fail-safe: store a folio only when space
   remains, so a full batch can no longer be written out of bounds.
   See the caller sketch after the pagevec.h hunk below.
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>
---
 include/linux/pagevec.h | 13 ++++++++++---
 mm/swap_state.c         |  2 --
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 5d3a0cccc6bf..c9bab240fb6e 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -11,8 +11,14 @@
 
 #include <linux/types.h>
 
-/* 31 pointers + header align the folio_batch structure to a power of two */
-#define PAGEVEC_SIZE	31
+/*
+ * With vm.page-cluster=5, the readahead window can span (1UL << 5) = 32
+ * folios, so 31 pointers are insufficient.  Size the batch to cover the
+ * full window for the swapin_readahead() based swap read decompress
+ * batching interface.
+ */
+#define SWAP_RA_ORDER_CEILING	5
+#define PAGEVEC_SIZE	(1UL << SWAP_RA_ORDER_CEILING)
 
 struct folio;
 
@@ -74,7 +80,8 @@ static inline unsigned int folio_batch_space(struct folio_batch *fbatch)
 static inline unsigned folio_batch_add(struct folio_batch *fbatch,
 		struct folio *folio)
 {
-	fbatch->folios[fbatch->nr++] = folio;
+	if (folio_batch_space(fbatch) > 0)
+		fbatch->folios[fbatch->nr++] = folio;
 	return folio_batch_space(fbatch);
 }
 
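Note that with this check, a folio passed to an already-full batch is
silently dropped rather than written out of bounds, so callers should
keep honoring the return value: 0 means no space remains, and the batch
must be drained before adding more. A hypothetical caller pattern,
where process_batch() is a made-up stand-in for whatever consumes the
batch (folio_batch_reinit() is the existing reset helper in pagevec.h):

	if (!folio_batch_add(fbatch, folio)) {
		/* no space left: consume the batch and start over */
		process_batch(fbatch);		/* hypothetical consumer */
		folio_batch_reinit(fbatch);
	}
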
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 3cebbff40804..0673593d363c 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -44,8 +44,6 @@ struct address_space *swapper_spaces[MAX_SWAPFILES] __read_mostly;
 static unsigned int nr_swapper_spaces[MAX_SWAPFILES] __read_mostly;
 static bool enable_vma_readahead __read_mostly = true;
 
-#define SWAP_RA_ORDER_CEILING	5
-
 #define SWAP_RA_WIN_SHIFT	(PAGE_SHIFT / 2)
 #define SWAP_RA_HITS_MASK	((1UL << SWAP_RA_WIN_SHIFT) - 1)
 #define SWAP_RA_HITS_MAX	SWAP_RA_HITS_MASK
--
2.27.0