Message-ID:
<TYAPR01MB4685E67999FE3372EAE2CC4692992@TYAPR01MB4685.jpnprd01.prod.outlook.com>
Date: Tue, 10 Sep 2024 03:56:01 +0800
From: chunpeng lee <chunpeng.lee@...look.com>
To: 21cnbao@...il.com
Cc: akpm@...ux-foundation.org,
baolin.wang@...ux.alibaba.com,
chrisl@...nel.org,
david@...hat.com,
hanchuanhua@...o.com,
ioworker0@...il.com,
kaleshsingh@...gle.com,
kasong@...cent.com,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
ryan.roberts@....com,
usamaarif642@...il.com,
v-songbaohua@...o.com,
yuanshuai@...o.com,
ziy@...dia.com
Subject: Re: [PATCH v4 1/2] mm: count the number of anonymous THPs per size
> Let's track for each anonymous THP size, how many of them are currently
> allocated. We'll track the complete lifespan of an anon THP, starting
> when it becomes an anon THP ("large anon folio") (->mapping gets set),
> until it gets freed (->mapping gets cleared).
IIUC, if an anon THP is swapped out as a whole, it is still counted, correct?
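
To make my reading concrete, here is a minimal sketch of the lifecycle-based
accounting I understand from the description above. The array and helper names
(anon_thp_alloc_count, count_anon_thp_alloc/free) are hypothetical and not the
patch's actual interface:

#include <linux/atomic.h>
#include <linux/mm.h>	/* struct folio, folio_order(), folio_test_large() */

/* hypothetical per-order counters, one slot per folio order */
static atomic_long_t anon_thp_alloc_count[MAX_PAGE_ORDER + 1];

/* called once when ->mapping is set, i.e. the folio becomes anon */
static inline void count_anon_thp_alloc(struct folio *folio)
{
	if (folio_test_large(folio))
		atomic_long_inc(&anon_thp_alloc_count[folio_order(folio)]);
}

/*
 * Called only when ->mapping is cleared at free time.  If this is the
 * complete lifespan being tracked, a folio that was unmapped and swapped
 * out but not yet freed (e.g. still sitting in the swapcache) would stay
 * counted until the final free.
 */
static inline void count_anon_thp_free(struct folio *folio)
{
	if (folio_test_large(folio))
		atomic_long_dec(&anon_thp_alloc_count[folio_order(folio)]);
}

If that matches the intent, then swap-out alone would not decrement the
counter, which is what I would like to confirm.
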
> Note that AnonPages in /proc/meminfo currently tracks the total number
> of *mapped* anonymous *pages*, and therefore has slightly different
> semantics. In the future, we might also want to track "nr_anon_mapped"
> for each THP size, which might be helpful when comparing it to the
> number of allocated anon THPs (long-term pinning, stuck in swapcache,
> memory leaks, ...).
If we do not consider tracking each THP size, can we expand the
NR_ANON_THPS statistic to include pte-mapped THPs as well? Something
along the lines of the diff below:
---
 mm/memcontrol-v1.c | 2 +-
 mm/rmap.c          | 5 +++++
 2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/mm/memcontrol-v1.c b/mm/memcontrol-v1.c
index 44803cbea38a..3e44175db81f 100644
--- a/mm/memcontrol-v1.c
+++ b/mm/memcontrol-v1.c
@@ -786,7 +786,7 @@ static int mem_cgroup_move_account(struct folio *folio,
 		if (folio_mapped(folio)) {
 			__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
 			__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
-			if (folio_test_pmd_mappable(folio)) {
+			if (folio_test_large(folio)) {
 				__mod_lruvec_state(from_vec, NR_ANON_THPS,
 						   -nr_pages);
 				__mod_lruvec_state(to_vec, NR_ANON_THPS,
diff --git a/mm/rmap.c b/mm/rmap.c
index a8797d1b3d49..97eb25d023ba 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1291,6 +1291,11 @@ static void __folio_mod_stat(struct folio *folio, int nr, int nr_pmdmapped)
 	if (nr) {
 		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
 		__lruvec_stat_mod_folio(folio, idx, nr);
+
+		if (folio_test_anon(folio) &&
+		    folio_test_large(folio) &&
+		    nr == 1 << folio_order(folio))
+			__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
 	}
 	if (nr_pmdmapped) {
 		if (folio_test_anon(folio)) {
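
To spell out what the rmap.c hunk is meant to do: for an order-4 anon mTHP
(64KiB with 4KiB base pages) that becomes fully pte-mapped in one go,
nr == 16 == 1 << folio_order(folio), so NR_ANON_THPS would also be bumped
by 16 pages instead of only being updated for PMD-mapped THPs.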
--