Message-ID: <20230726073409.631838-1-liubo254@huawei.com>
Date: Wed, 26 Jul 2023 15:34:09 +0800
From: liubo <liubo254@...wei.com>
To: <akpm@...ux-foundation.org>
CC: <liubo254@...wei.com>, <linux-fsdevel@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <hughd@...gle.com>,
<peterx@...hat.com>, <willy@...radead.org>
Subject: [PATCH] smaps: Fix incorrect memory statistics reported through /proc/pid/smaps

In commit 474098edac26 ("mm/gup: replace FOLL_NUMA by
gup_can_follow_protnone()"), FOLL_NUMA was removed and replaced by the
gup_can_follow_protnone() helper.
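
For reference, after that commit the helper behaves roughly as in the
sketch below (paraphrased from include/linux/mm.h around v6.4/v6.5; the
in-tree comment differs): only FOLL_FORCE callers may still follow
PROT_NONE (NUMA-hinting) mappings.

static inline bool gup_can_follow_protnone(unsigned int flags)
{
	/* Only FOLL_FORCE callers may follow PROT_NONE (NUMA-hinting)
	 * mappings; all other follow_page()/GUP users get NULL instead.
	 */
	return flags & FOLL_FORCE;
}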

However, when a user-space process uses transparent huge pages, the
memory usage reported through /proc/pid/smaps_rollup is not consistent
with the RSS reported in /proc/pid/status. For example:

cat /proc/15427/status
VmRSS: 20973024 kB
RssAnon: 20971616 kB
RssFile: 1408 kB
RssShmem: 0 kB
cat /proc/15427/smaps_rollup
00400000-7ffcc372d000 ---p 00000000 00:00 0 [rollup]
Rss: 14419432 kB
Pss: 14418079 kB
Pss_Dirty: 14418016 kB
Pss_Anon: 14418016 kB
Pss_File: 63 kB
Pss_Shmem: 0 kB
Anonymous: 14418016 kB
LazyFree: 0 kB
AnonHugePages: 14417920 kB
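
The following minimal user-space sketch (not part of the original
report; the file name, mapping size and flags are arbitrary) can be
used to compare the two interfaces on an affected kernel. A visible gap
only appears once automatic NUMA balancing has marked some of the huge
pages PROT_NONE, so it depends on the system and may not reproduce
immediately (or at all):

/* smaps_vs_status.c - fault in THP-backed anonymous memory, then print
 * the RSS-related counters from /proc/self/status and
 * /proc/self/smaps_rollup so that they can be compared.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define SIZE	(1UL << 30)	/* 1 GiB of anonymous memory */

static void dump(const char *path)
{
	char line[256];
	FILE *f = fopen(path, "r");

	if (!f)
		return;
	while (fgets(line, sizeof(line), f))
		if (strstr(line, "VmRSS") || strstr(line, "Rss") ||
		    strstr(line, "Anonymous"))
			printf("%s: %s", path, line);
	fclose(f);
}

int main(void)
{
	char *buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	madvise(buf, SIZE, MADV_HUGEPAGE);	/* ask for transparent huge pages */
	memset(buf, 0x5a, SIZE);		/* fault the whole range in */

	dump("/proc/self/status");		/* VmRSS, RssAnon, ... */
	dump("/proc/self/smaps_rollup");	/* Rss, Anonymous, ... */
	return 0;
}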

The root cause is that, when the page table is walked, the pages counted
by smaps_pmd_entry() no longer include PROT_NONE (NUMA-hinting) mappings,
because follow_trans_huge_pmd() does not return such pages unless the
caller is allowed to follow protnone mappings. Therefore, pass the
FOLL_FORCE flag when looking up the page through the
follow_trans_huge_pmd() interface so that PROT_NONE-mapped pages are
counted again, which resolves the inconsistency described above.
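
For context, the check that filters those pages out lives in
follow_trans_huge_pmd(); paraphrased (surrounding code omitted, comment
added here), it is roughly:

	/* Inside follow_trans_huge_pmd(): a PROT_NONE (NUMA-hinting) huge
	 * page is not returned unless the caller may follow protnone
	 * mappings, i.e. passes FOLL_FORCE.
	 */
	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(flags))
		return NULL;

With FOLL_DUMP | FOLL_FORCE this check no longer filters out the
PROT_NONE huge pages, so smaps_pmd_entry() accounts them again.
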
Signed-off-by: liubo <liubo254@...wei.com>
Fixes: 474098edac26 ("mm/gup: replace FOLL_NUMA by gup_can_follow_protnone()")
---
fs/proc/task_mmu.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index c1e6531cb02a..ed08f9b869e2 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -571,8 +571,10 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
 	bool migration = false;
 
 	if (pmd_present(*pmd)) {
-		/* FOLL_DUMP will return -EFAULT on huge zero page */
-		page = follow_trans_huge_pmd(vma, addr, pmd, FOLL_DUMP);
+		/* FOLL_DUMP will return -EFAULT on huge zero page.
+		 * FOLL_FORCE is needed to also follow PROT_NONE-mapped pages.
+		 */
+		page = follow_trans_huge_pmd(vma, addr, pmd, FOLL_DUMP | FOLL_FORCE);
 	} else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) {
 		swp_entry_t entry = pmd_to_swp_entry(*pmd);
 
--
2.27.0