Message-Id: <20241121162735.9558-1-haowenchao22@gmail.com>
Date: Fri, 22 Nov 2024 00:27:35 +0800
From: Wenchao Hao <haowenchao22@...il.com>
To: Jonathan Corbet <corbet@....net>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Barry Song <baohua@...nel.org>,
Ryan Roberts <ryan.roberts@....com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Usama Arif <usamaarif642@...il.com>,
Lance Yang <ioworker0@...il.com>,
Matthew Wilcox <willy@...radead.org>,
Peter Xu <peterx@...hat.com>,
linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Cc: Wenchao Hao <haowenchao22@...il.com>,
Chuanhua Han <hanchuanhua@...o.com>
Subject: [PATCH] mm: add per-order mTHP swap-in fallback counters
Large folio swap-in is now supported, but there is no way to analyze how
often it succeeds. Similar to anon_fault_fallback, add a per-order mTHP
swpin_fallback counter to help calculate the success ratio. The new
counter is exposed at:
/sys/kernel/mm/transparent_hugepage/hugepages-<size>/stats/swpin_fallback
Signed-off-by: Wenchao Hao <haowenchao22@...il.com>
CC: Chuanhua Han <hanchuanhua@...o.com>
---
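Note for reviewers, not part of the patch: the new counter is meant to be
read together with the existing swpin counter, so the per-order success
ratio is roughly swpin / (swpin + swpin_fallback). Below is a minimal
userspace sketch of that calculation; the hugepages-64kB directory is only
an example and should be replaced by whichever sizes the running kernel
exposes.

/*
 * Reads swpin and swpin_fallback for one mTHP size and prints the
 * swap-in success ratio.  The hugepages-64kB path is an example.
 */
#include <stdio.h>

static long read_counter(const char *path)
{
	FILE *f = fopen(path, "r");
	long val = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	const char *base =
		"/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/";
	char path[256];
	long swpin, fallback;

	snprintf(path, sizeof(path), "%sswpin", base);
	swpin = read_counter(path);
	snprintf(path, sizeof(path), "%sswpin_fallback", base);
	fallback = read_counter(path);

	if (swpin < 0 || fallback < 0) {
		fprintf(stderr, "mTHP swap-in counters not available\n");
		return 1;
	}

	/* attempts at this order = successful swap-ins + fallbacks */
	printf("swpin=%ld swpin_fallback=%ld success=%.2f%%\n",
	       swpin, fallback,
	       swpin + fallback ?
			100.0 * swpin / (swpin + fallback) : 0.0);
	return 0;
}

The same calculation can be repeated for every hugepages-<size> directory
to compare how well different orders are being swapped in.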
 Documentation/admin-guide/mm/transhuge.rst | 5 +++++
 include/linux/huge_mm.h                    | 1 +
 mm/huge_memory.c                           | 3 +++
 mm/memory.c                                | 1 +
 4 files changed, 10 insertions(+)
diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 5034915f4e8e..f5c775457913 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -561,6 +561,11 @@ swpin
is incremented every time a huge page is swapped in from a non-zswap
swap device in one piece.
+swpin_fallback
+ is incremented if a huge page swapin fails to allocate a huge page
+ and instead falls back to using huge pages with lower orders or
+ small pages.
+
swpout
is incremented every time a huge page is swapped out to a non-zswap
swap device in one piece without splitting.
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index b94c2e8ee918..dcf08f8fdf52 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -121,6 +121,7 @@ enum mthp_stat_item {
MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
MTHP_STAT_ZSWPOUT,
MTHP_STAT_SWPIN,
+ MTHP_STAT_SWPIN_FALLBACK,
MTHP_STAT_SWPOUT,
MTHP_STAT_SWPOUT_FALLBACK,
MTHP_STAT_SHMEM_ALLOC,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ee335d96fc39..6b089a41acef 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -617,6 +617,7 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
+DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
#ifdef CONFIG_SHMEM
@@ -637,6 +638,7 @@ static struct attribute *anon_stats_attrs[] = {
#ifndef CONFIG_SHMEM
&zswpout_attr.attr,
&swpin_attr.attr,
+ &swpin_fallback_attr.attr,
&swpout_attr.attr,
&swpout_fallback_attr.attr,
#endif
@@ -669,6 +671,7 @@ static struct attribute *any_stats_attrs[] = {
#ifdef CONFIG_SHMEM
&zswpout_attr.attr,
&swpin_attr.attr,
+ &swpin_fallback_attr.attr,
&swpout_attr.attr,
&swpout_fallback_attr.attr,
#endif
diff --git a/mm/memory.c b/mm/memory.c
index 209885a4134f..7cda8b65e0c9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4191,6 +4191,7 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
				return folio;
			folio_put(folio);
		}
+		count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK);
		order = next_order(&orders, order);
	}
--
2.45.0