Message-ID: <20240704012905.42971-3-ioworker0@gmail.com>
Date: Thu, 4 Jul 2024 09:29:05 +0800
From: Lance Yang <ioworker0@...il.com>
To: akpm@...ux-foundation.org
Cc: dj456119@...il.com,
21cnbao@...il.com,
ryan.roberts@....com,
david@...hat.com,
shy828301@...il.com,
ziy@...dia.com,
libang.li@...group.com,
baolin.wang@...ux.alibaba.com,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
Lance Yang <ioworker0@...il.com>,
Barry Song <baohua@...nel.org>,
Mingzhe Yang <mingzhe.yang@...com>
Subject: [PATCH v3 2/2] mm: add docs for per-order mTHP split counters
This commit introduces documentation for mTHP split counters in
transhuge.rst.
Reviewed-by: Barry Song <baohua@...nel.org>
Signed-off-by: Mingzhe Yang <mingzhe.yang@...com>
Signed-off-by: Lance Yang <ioworker0@...il.com>
---
Documentation/admin-guide/mm/transhuge.rst | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 1f72b00af5d3..0830aa173a8b 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -369,10 +369,6 @@ also applies to the regions registered in khugepaged.
Monitoring usage
================
-.. note::
- Currently the below counters only record events relating to
- PMD-sized THP. Events relating to other THP sizes are not included.
-
The number of PMD-sized anonymous transparent huge pages currently used by the
system is available by reading the AnonHugePages field in ``/proc/meminfo``.
To identify what applications are using PMD-sized anonymous transparent huge
@@ -514,6 +510,22 @@ file_fallback_charge
falls back to using small pages even though the allocation was
successful.
+split
+ is incremented every time a huge page is successfully split into
+ smaller orders. This can happen for a variety of reasons, but a
+ common one is that a huge page is old and is being reclaimed.
+ This action implies splitting any block mappings into PTEs.
+
+split_failed
+ is incremented if the kernel fails to split a huge
+ page. This can happen if the page was pinned by somebody.
+
+split_deferred
+ is incremented when a huge page is put onto the split
+ queue. This happens when a huge page is partially unmapped and
+ splitting it would free up some memory. Pages on the split queue
+ will be split under memory pressure.
+
As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in ``/proc/vmstat`` to help
--
2.45.2
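For quick inspection of the new counters, here is a minimal Python
sketch of how the per-order split counters described above could be
read. It assumes a kernel that exposes the per-order stats under
/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/ (added
by patch 1/2 of this series); the read_mthp_split_counters helper and
the output format are illustrative only, not something this patch
provides.

#!/usr/bin/env python3
# Sketch: dump the per-order mTHP split counters for every hugepage
# size exposed under /sys/kernel/mm/transparent_hugepage/.
import glob
import os

STATS = ("split", "split_failed", "split_deferred")
BASE = "/sys/kernel/mm/transparent_hugepage"

def read_mthp_split_counters():
    """Return {"hugepages-<size>kB": {counter: value}} per mTHP size."""
    counters = {}
    for stats_dir in glob.glob(os.path.join(BASE, "hugepages-*kB", "stats")):
        size = os.path.basename(os.path.dirname(stats_dir))
        per_size = {}
        for name in STATS:
            try:
                with open(os.path.join(stats_dir, name)) as f:
                    per_size[name] = int(f.read().strip())
            except FileNotFoundError:
                # Kernels without this series lack these files; skip them.
                continue
        counters[size] = per_size
    return counters

if __name__ == "__main__":
    for size, vals in sorted(read_mthp_split_counters().items()):
        print(size, vals)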