Message-Id: <20241105211934.5083-1-21cnbao@gmail.com>
Date: Wed, 6 Nov 2024 10:19:34 +1300
From: Barry Song <21cnbao@...il.com>
To: akpm@...ux-foundation.org,
linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org,
Barry Song <v-songbaohua@...o.com>,
Nhat Pham <nphamcs@...il.com>,
Usama Arif <usamaarif642@...il.com>,
Chengming Zhou <chengming.zhou@...ux.dev>,
Yosry Ahmed <yosryahmed@...gle.com>,
Hailong Liu <hailong.liu@...o.com>,
Johannes Weiner <hannes@...xchg.org>,
David Hildenbrand <david@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Andi Kleen <ak@...ux.intel.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Chris Li <chrisl@...nel.org>,
"Huang, Ying" <ying.huang@...el.com>,
Kairui Song <kasong@...cent.com>,
Ryan Roberts <ryan.roberts@....com>
Subject: [PATCH v3] mm: count zeromap read and set for swapout and swapin
From: Barry Song <v-songbaohua@...o.com>
When the proportion of folios from the zeromap is small, missing their
accounting may not significantly impact profiling. However, it’s easy
to construct a scenario where this becomes an issue—for example,
allocating 1 GB of memory, writing zeros from userspace, followed by
MADV_PAGEOUT, and then swapping it back in. In this case, the swap-out
and swap-in counts seem to vanish into a black hole, potentially
causing semantic ambiguity.
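As an illustration only (not part of this patch), a userspace sketch of
that scenario could look roughly like the following; it assumes swap is
enabled, 4 KiB pages, and a libc that exposes MADV_PAGEOUT:

/* sketch: 1 GB of zero-filled anon memory, swapped out and back in */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT 21		/* from <asm-generic/mman-common.h> */
#endif

int main(void)
{
	size_t len = 1UL << 30;		/* 1 GB */
	volatile char *p;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0, len);			/* write zeros from userspace */
	if (madvise(buf, len, MADV_PAGEOUT))	/* swap the range out */
		perror("madvise(MADV_PAGEOUT)");
	for (p = buf; p < buf + len; p += 4096)	/* fault it back in */
		(void)*p;
	munmap(buf, len);
	return 0;
}

Without this patch, running the above moves neither pswpin/pswpout nor
the zswap/zRAM counters, which is exactly the ambiguity described above.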
On the other hand, Usama reported that zero-filled pages can exceed 10% in
workloads utilizing zswap, while Hailong noted that some apps on Android
have more than 6% zero-filled pages. Before commit 0ca0c24e3211 ("mm: store
zero pages to be swapped out in a bitmap"), both zswap and zRAM implemented
similar optimizations, leading to these optimized-out pages being counted
in either zswap or zRAM counters (with pswpin/pswpout also increasing for
zRAM). Since zeromap now acts before both zswap and zRAM, userspace can
no longer observe these swap-out and swap-in events.
We have three ways to address this:
1. Introduce a dedicated counter specifically for the zeromap.
2. Use pswpin/pswpout accounting, treating the zeromap as a standard
backend. This approach aligns with zRAM's current handling of
same-page fills at the device level. However, it would mean losing
the optimized-out page counters previously available in zRAM and
would not align with systems using zswap. Additionally, as noted by
Nhat Pham, pswpin/pswpout counters apply only to I/O done directly
to the backend device.
3. Count zeromap pages under zswap, aligning with system behavior when
zswap is enabled. However, this would not be consistent with zRAM,
nor would it align with systems lacking both zswap and zRAM.
Given the complications with options 2 and 3, this patch selects
option 1.
These counters are available in /proc/vmstat (system-wide counters) and
in a memcg's memory.stat (counters for the memcg of interest).
For example:
$ grep -E 'swpin_zero|swpout_zero' /proc/vmstat
swpin_zero 1648
swpout_zero 33536
$ grep -E 'swpin_zero|swpout_zero' /sys/fs/cgroup/system.slice/memory.stat
swpin_zero 3905
swpout_zero 3985
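Purely for illustration (not part of the patch), a monitoring agent could
pick up the new keys with something as simple as the sketch below, which
just scans /proc/vmstat:

/* sketch: print the two new counters from /proc/vmstat */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char key[64];
	unsigned long long val;
	FILE *fp = fopen("/proc/vmstat", "r");

	if (!fp) {
		perror("/proc/vmstat");
		return 1;
	}
	while (fscanf(fp, "%63s %llu", key, &val) == 2)
		if (!strcmp(key, "swpin_zero") || !strcmp(key, "swpout_zero"))
			printf("%s %llu\n", key, val);
	fclose(fp);
	return 0;
}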
This patch does not address any specific zeromap bug, but the missing
swpout and swpin counts for zero-filled pages can be highly confusing
and may mislead user-space agents that rely on changes in these counters
as indicators. Therefore, we add a Fixes tag to encourage the inclusion
of these counters in any kernel version that has zeromap.
Fixes: 0ca0c24e3211 ("mm: store zero pages to be swapped out in a bitmap")
Reviewed-by: Nhat Pham <nphamcs@...il.com>
Cc: Usama Arif <usamaarif642@...il.com>
Cc: Chengming Zhou <chengming.zhou@...ux.dev>
Cc: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Hailong Liu <hailong.liu@...o.com>
Cc: Johannes Weiner <hannes@...xchg.org>
Cc: David Hildenbrand <david@...hat.com>
Cc: Hugh Dickins <hughd@...gle.com>
Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
Cc: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Andi Kleen <ak@...ux.intel.com>
Cc: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: Chris Li <chrisl@...nel.org>
Cc: "Huang, Ying" <ying.huang@...el.com>
Cc: Kairui Song <kasong@...cent.com>
Cc: Ryan Roberts <ryan.roberts@....com>
Signed-off-by: Barry Song <v-songbaohua@...o.com>
---
-v3:
* collected Nhat's reviewed-by, thanks!
* refine doc per Usama and David, thanks!
* refine changelog
Documentation/admin-guide/cgroup-v2.rst | 9 +++++++++
include/linux/vm_event_item.h | 2 ++
mm/memcontrol.c | 4 ++++
mm/page_io.c | 16 ++++++++++++++++
mm/vmstat.c | 2 ++
5 files changed, 33 insertions(+)
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index db3799f1483e..13736a94edfd 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1599,6 +1599,15 @@ The following nested keys are defined.
pglazyfreed (npn)
Amount of reclaimed lazyfree pages
+ swpin_zero
+ Number of pages swapped into memory and filled with zero, where I/O
+ was optimized out because the page content was detected to be zero
+ during swapout.
+
+ swpout_zero
+ Number of zero-filled pages swapped out with I/O skipped due to the
+ content being detected as zero.
+
zswpin
Number of pages moved in to memory from zswap.
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index aed952d04132..f70d0958095c 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -134,6 +134,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
#ifdef CONFIG_SWAP
SWAP_RA,
SWAP_RA_HIT,
+ SWPIN_ZERO,
+ SWPOUT_ZERO,
#ifdef CONFIG_KSM
KSM_SWPIN_COPY,
#endif
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5e44d6e7591e..7b3503d12aaf 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -441,6 +441,10 @@ static const unsigned int memcg_vm_event_stat[] = {
PGDEACTIVATE,
PGLAZYFREE,
PGLAZYFREED,
+#ifdef CONFIG_SWAP
+ SWPIN_ZERO,
+ SWPOUT_ZERO,
+#endif
#ifdef CONFIG_ZSWAP
ZSWPIN,
ZSWPOUT,
diff --git a/mm/page_io.c b/mm/page_io.c
index 5d9b6e6cf96c..4b4ea8e49cf6 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -204,7 +204,9 @@ static bool is_folio_zero_filled(struct folio *folio)
static void swap_zeromap_folio_set(struct folio *folio)
{
+ struct obj_cgroup *objcg = get_obj_cgroup_from_folio(folio);
struct swap_info_struct *sis = swp_swap_info(folio->swap);
+ int nr_pages = folio_nr_pages(folio);
swp_entry_t entry;
unsigned int i;
@@ -212,6 +214,12 @@ static void swap_zeromap_folio_set(struct folio *folio)
entry = page_swap_entry(folio_page(folio, i));
set_bit(swp_offset(entry), sis->zeromap);
}
+
+ count_vm_events(SWPOUT_ZERO, nr_pages);
+ if (objcg) {
+ count_objcg_events(objcg, SWPOUT_ZERO, nr_pages);
+ obj_cgroup_put(objcg);
+ }
}
static void swap_zeromap_folio_clear(struct folio *folio)
@@ -507,6 +515,7 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
static bool swap_read_folio_zeromap(struct folio *folio)
{
int nr_pages = folio_nr_pages(folio);
+ struct obj_cgroup *objcg;
bool is_zeromap;
/*
@@ -521,6 +530,13 @@ static bool swap_read_folio_zeromap(struct folio *folio)
if (!is_zeromap)
return false;
+ objcg = get_obj_cgroup_from_folio(folio);
+ count_vm_events(SWPIN_ZERO, nr_pages);
+ if (objcg) {
+ count_objcg_events(objcg, SWPIN_ZERO, nr_pages);
+ obj_cgroup_put(objcg);
+ }
+
folio_zero_range(folio, 0, folio_size(folio));
folio_mark_uptodate(folio);
return true;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 22a294556b58..c8ef7352f9ed 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1418,6 +1418,8 @@ const char * const vmstat_text[] = {
#ifdef CONFIG_SWAP
"swap_ra",
"swap_ra_hit",
+ "swpin_zero",
+ "swpout_zero",
#ifdef CONFIG_KSM
"ksm_swpin_copy",
#endif
--
2.39.3 (Apple Git-146)