Message-ID: <20240215215907.20121-8-osalvador@suse.de>
Date: Thu, 15 Feb 2024 22:59:07 +0100
From: Oscar Salvador <osalvador@...e.de>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Marco Elver <elver@...gle.com>,
Andrey Konovalov <andreyknvl@...il.com>,
Alexander Potapenko <glider@...gle.com>,
Oscar Salvador <osalvador@...e.de>
Subject: [PATCH v10 7/7] mm,page_owner: Update Documentation regarding page_owner_stacks

Update the page_owner documentation to cover the new page_owner_stacks
feature and show how it can be used.

Signed-off-by: Oscar Salvador <osalvador@...e.de>
Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
Reviewed-by: Marco Elver <elver@...gle.com>
---
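A quick note on using the new interface in practice, assuming debugfs is
mounted at /sys/kernel/debug (the default on most systems; file names
below are just illustrative): taking two snapshots some time apart and
diffing them shows which stacks keep growing, for example:

    # Snapshot the per-stack outstanding allocations, twice:
    cat /sys/kernel/debug/page_owner_stacks/show_stacks > stacks_t0.txt
    sleep 60
    cat /sys/kernel/debug/page_owner_stacks/show_stacks > stacks_t1.txt

    # Records whose stack_count changed between the snapshots:
    diff stacks_t0.txt stacks_t1.txt
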
Documentation/mm/page_owner.rst | 45 +++++++++++++++++++++++++++++++++
1 file changed, 45 insertions(+)

diff --git a/Documentation/mm/page_owner.rst b/Documentation/mm/page_owner.rst
index 62e3f7ab23cc..0d0334cd5179 100644
--- a/Documentation/mm/page_owner.rst
+++ b/Documentation/mm/page_owner.rst
@@ -24,6 +24,11 @@ fragmentation statistics can be obtained through gfp flag information of
each page. It is already implemented and activated if page owner is
enabled. Other usages are more than welcome.
+It can also be used to show all the stacks and their outstanding
+allocations, which gives us a quick overview of where the memory is going
+without the need to comb through all the pages and match allocation and
+free operations.
+
page owner is disabled by default. So, if you'd like to use it, you need
to add "page_owner=on" to your boot cmdline. If the kernel is built
with page owner and page owner is disabled in runtime due to not enabling
@@ -68,6 +73,46 @@ Usage
4) Analyze information from page owner::
+ cat /sys/kernel/debug/page_owner_stacks/show_stacks > stacks.txt
+ cat stacks.txt
+ prep_new_page+0xa9/0x120
+ get_page_from_freelist+0x7e6/0x2140
+ __alloc_pages+0x18a/0x370
+ new_slab+0xc8/0x580
+ ___slab_alloc+0x1f2/0xaf0
+ __slab_alloc.isra.86+0x22/0x40
+ kmem_cache_alloc+0x31b/0x350
+ __khugepaged_enter+0x39/0x100
+ dup_mmap+0x1c7/0x5ce
+ copy_process+0x1afe/0x1c90
+ kernel_clone+0x9a/0x3c0
+ __do_sys_clone+0x66/0x90
+ do_syscall_64+0x7f/0x160
+ entry_SYSCALL_64_after_hwframe+0x6c/0x74
+ stack_count: 234
+ ...
+ ...
+ echo 7000 > /sys/kernel/debug/page_owner_stacks/count_threshold
+ cat /sys/kernel/debug/page_owner_stacks/show_stacks > stacks_7000.txt
+ cat stacks_7000.txt
+ prep_new_page+0xa9/0x120
+ get_page_from_freelist+0x7e6/0x2140
+ __alloc_pages+0x18a/0x370
+ alloc_pages_mpol+0xdf/0x1e0
+ folio_alloc+0x14/0x50
+ filemap_alloc_folio+0xb0/0x100
+ page_cache_ra_unbounded+0x97/0x180
+ filemap_fault+0x4b4/0x1200
+ __do_fault+0x2d/0x110
+ do_pte_missing+0x4b0/0xa30
+ __handle_mm_fault+0x7fa/0xb70
+ handle_mm_fault+0x125/0x300
+ do_user_addr_fault+0x3c9/0x840
+ exc_page_fault+0x68/0x150
+ asm_exc_page_fault+0x22/0x30
+ stack_count: 8248
+ ...
+
cat /sys/kernel/debug/page_owner > page_owner_full.txt
./page_owner_sort page_owner_full.txt sorted_page_owner.txt
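
Since each record in the new show_stacks format ends with a
"stack_count:" line, the dump is easy to post-process with standard
tools. A minimal sketch, reusing the file names and the example count
from the output above:

    # List the outstanding-allocation counts, largest first:
    awk '$1 == "stack_count:" { print $2 }' stacks.txt | sort -rn | head

    # Pull out the full stack for a count of interest
    # (8248 is the value from the example above; grep's -B context
    # is only an approximation of the stack length):
    grep -B 20 'stack_count: 8248' stacks_7000.txt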
--
2.43.0