Message-Id: <20241018064805.336490-8-kanchana.p.sridhar@intel.com>
Date: Thu, 17 Oct 2024 23:48:05 -0700
From: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>
To: linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
hannes@...xchg.org,
yosryahmed@...gle.com,
nphamcs@...il.com,
chengming.zhou@...ux.dev,
usamaarif642@...il.com,
ryan.roberts@....com,
ying.huang@...el.com,
21cnbao@...il.com,
akpm@...ux-foundation.org,
hughd@...gle.com,
willy@...radead.org,
bfoster@...hat.com,
dchinner@...hat.com,
chrisl@...nel.org,
david@...hat.com
Cc: wajdi.k.feghali@...el.com,
vinodh.gopal@...el.com,
kanchana.p.sridhar@...el.com
Subject: [RFC PATCH v1 7/7] mm: For IAA batching, reduce SWAP_BATCH and modify swap slot cache thresholds.
When IAA is used for compress batching and decompress batching of folios,
the swapout and swapin path latencies drop significantly, which in turn
reduces swap page-fault latency. As a result, swap entries are freed, and
swap slots released, more frequently.

With the existing SWAP_BATCH and SWAP_SLOTS_CACHE_SIZE value of 64, this
can cause contention on the swap_info_struct lock in
swapcache_free_entries(), which can escalate to CPU hard lockups in highly
contended server scenarios.

To prevent this, SWAP_BATCH and SWAP_SLOTS_CACHE_SIZE are reduced to 16
when IAA is used for compress/decompress batching. The swap_slots_cache
activate/deactivate thresholds are modified accordingly, so that we gain
stability without compromising performance.
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@...el.com>
---
include/linux/swap.h | 7 +++++++
include/linux/swap_slots.h | 7 +++++++
2 files changed, 14 insertions(+)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index ca533b478c21..3987faa0a2ff 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -13,6 +13,7 @@
 #include <linux/pagemap.h>
 #include <linux/atomic.h>
 #include <linux/page-flags.h>
+#include <linux/pagevec.h>
 #include <uapi/linux/mempolicy.h>
 #include <asm/page.h>
 
@@ -32,7 +33,13 @@ struct pagevec;
 #define SWAP_FLAGS_VALID	(SWAP_FLAG_PRIO_MASK | SWAP_FLAG_PREFER | \
 				 SWAP_FLAG_DISCARD | SWAP_FLAG_DISCARD_ONCE | \
 				 SWAP_FLAG_DISCARD_PAGES)
+
+#if defined(CONFIG_ZSWAP_STORE_BATCHING_ENABLED) || \
+    defined(CONFIG_ZSWAP_LOAD_BATCHING_ENABLED)
+#define SWAP_BATCH 16
+#else
 #define SWAP_BATCH 64
+#endif
 
 static inline int current_is_kswapd(void)
 {
diff --git a/include/linux/swap_slots.h b/include/linux/swap_slots.h
index 15adfb8c813a..1b6e4e2798bd 100644
--- a/include/linux/swap_slots.h
+++ b/include/linux/swap_slots.h
@@ -7,8 +7,15 @@
 #include <linux/mutex.h>
 
 #define SWAP_SLOTS_CACHE_SIZE	SWAP_BATCH
+
+#if defined(CONFIG_ZSWAP_STORE_BATCHING_ENABLED) || \
+    defined(CONFIG_ZSWAP_LOAD_BATCHING_ENABLED)
+#define THRESHOLD_ACTIVATE_SWAP_SLOTS_CACHE	(40*SWAP_SLOTS_CACHE_SIZE)
+#define THRESHOLD_DEACTIVATE_SWAP_SLOTS_CACHE	(16*SWAP_SLOTS_CACHE_SIZE)
+#else
 #define THRESHOLD_ACTIVATE_SWAP_SLOTS_CACHE	(5*SWAP_SLOTS_CACHE_SIZE)
 #define THRESHOLD_DEACTIVATE_SWAP_SLOTS_CACHE	(2*SWAP_SLOTS_CACHE_SIZE)
+#endif
 
 struct swap_slots_cache {
 	bool lock_initialized;
--
2.27.0