Message-ID: <CAK1f24k6n1ngSicrSCv5jX+xa75t-7a3zZB4A95fUvDhteshEQ@mail.gmail.com>
Date: Sat, 23 Nov 2024 10:36:09 +0800
From: Lance Yang <ioworker0@...il.com>
To: Wenchao Hao <haowenchao22@...il.com>
Cc: Jonathan Corbet <corbet@....net>, Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>, Barry Song <baohua@...nel.org>,
Ryan Roberts <ryan.roberts@....com>, Baolin Wang <baolin.wang@...ux.alibaba.com>,
Usama Arif <usamaarif642@...il.com>, Matthew Wilcox <willy@...radead.org>,
Peter Xu <peterx@...hat.com>, linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v2] mm: add per-order mTHP swap-in fallback/fallback_charge
counters
Hi Wenchao,

On Sat, Nov 23, 2024 at 12:14 AM Wenchao Hao <haowenchao22@...il.com> wrote:
>
> Currently, large folio swap-in is supported, but we lack a method to
> analyze its success ratio. Similar to anon_fault_fallback, we introduce
> per-order mTHP swpin_fallback and swpin_fallback_charge counters for
> calculating that ratio. The new counters are located at:
>
> /sys/kernel/mm/transparent_hugepage/hugepages-<size>/stats/
> swpin_fallback
> swpin_fallback_charge
>
> Signed-off-by: Wenchao Hao <haowenchao22@...il.com>
> ---
> V2:
> Introduce swpin_fallback_charge, which is incremented when charging
> a huge page fails despite a successful allocation.
>
> Documentation/admin-guide/mm/transhuge.rst | 10 ++++++++++
> include/linux/huge_mm.h | 2 ++
> mm/huge_memory.c | 6 ++++++
> mm/memory.c | 2 ++
> 4 files changed, 20 insertions(+)
>
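As a side note for anyone wanting to consume these counters: a rough
userspace sketch for computing the swap-in success ratio could look like
the below. This is not part of the patch; "hugepages-64kB" is just an
example directory, read_stat() is a hypothetical helper, and the
swpin / (swpin + swpin_fallback) formula is my reading of how the
counters are meant to pair up:

	#include <stdio.h>

	/* hypothetical helper: read one counter file as an integer */
	static unsigned long read_stat(const char *dir, const char *name)
	{
		char path[256];
		unsigned long val = 0;
		FILE *f;

		snprintf(path, sizeof(path), "%s/%s", dir, name);
		f = fopen(path, "r");
		if (f) {
			if (fscanf(f, "%lu", &val) != 1)
				val = 0;
			fclose(f);
		}
		return val;
	}

	int main(void)
	{
		/* example size; substitute the order you care about */
		const char *dir =
			"/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats";
		unsigned long swpin = read_stat(dir, "swpin");
		unsigned long fallback = read_stat(dir, "swpin_fallback");

		if (swpin + fallback)
			printf("mTHP swap-in success ratio: %.2f%%\n",
			       100.0 * swpin / (swpin + fallback));
		return 0;
	}
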
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> index 5034915f4e8e..9c07612281b5 100644
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -561,6 +561,16 @@ swpin
> is incremented every time a huge page is swapped in from a non-zswap
> swap device in one piece.
>
Would the following be better?
+swpin_fallback
+ is incremented if swapin fails to allocate or charge a huge page
+ and instead falls back to using small pages.
+
+swpin_fallback_charge
+ is incremented if swapin fails to charge a huge page and instead
+ falls back to using small pages even though the allocation was
+ successful.

Thanks,
Lance
> +swpin_fallback
> + is incremented if a huge page swapin fails to allocate or charge
> + a huge page and instead falls back to using huge pages with
> + lower orders or small pages.
> +
> +swpin_fallback_charge
> + is incremented if a page swapin fails to charge a huge page and
> + instead falls back to using huge pages with lower orders or
> + small pages even though the allocation was successful.
> +
> swpout
> is incremented every time a huge page is swapped out to a non-zswap
> swap device in one piece without splitting.
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index b94c2e8ee918..93e509b6c00e 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -121,6 +121,8 @@ enum mthp_stat_item {
> MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
> MTHP_STAT_ZSWPOUT,
> MTHP_STAT_SWPIN,
> + MTHP_STAT_SWPIN_FALLBACK,
> + MTHP_STAT_SWPIN_FALLBACK_CHARGE,
> MTHP_STAT_SWPOUT,
> MTHP_STAT_SWPOUT_FALLBACK,
> MTHP_STAT_SHMEM_ALLOC,
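For reviewers less familiar with this plumbing: these enum entries index
the per-cpu mthp_stats array, and count_mthp_stat() bumps them. From
memory (so please double-check against the tree), the helper in
huge_mm.h is roughly:

	static inline void count_mthp_stat(int order, enum mthp_stat_item item)
	{
		if (order <= 0 || order > PMD_ORDER)
			return;

		this_cpu_inc(mthp_stats.stats[order][item]);
	}

so the two new entries get per-order, per-cpu counting for free.
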
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index ee335d96fc39..46749dded1c9 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -617,6 +617,8 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
> DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
> DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
> DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
> +DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
> +DEFINE_MTHP_STAT_ATTR(swpin_fallback_charge, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
> DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
> DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
> #ifdef CONFIG_SHMEM
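And DEFINE_MTHP_STAT_ATTR() exposes each entry via sysfs by summing the
per-cpu values at read time, something along these lines (again
paraphrased from memory):

	static unsigned long sum_mthp_stat(int order, enum mthp_stat_item item)
	{
		unsigned long sum = 0;
		int cpu;

		for_each_possible_cpu(cpu)
			sum += per_cpu(mthp_stats, cpu).stats[order][item];

		return sum;
	}

so reads of swpin_fallback{,_charge} are whole-system sums, consistent
with the other mTHP stats.
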
> @@ -637,6 +639,8 @@ static struct attribute *anon_stats_attrs[] = {
> #ifndef CONFIG_SHMEM
> &zswpout_attr.attr,
> &swpin_attr.attr,
> + &swpin_fallback_attr.attr,
> + &swpin_fallback_charge_attr.attr,
> &swpout_attr.attr,
> &swpout_fallback_attr.attr,
> #endif
> @@ -669,6 +673,8 @@ static struct attribute *any_stats_attrs[] = {
> #ifdef CONFIG_SHMEM
> &zswpout_attr.attr,
> &swpin_attr.attr,
> + &swpin_fallback_attr.attr,
> + &swpin_fallback_charge_attr.attr,
> &swpout_attr.attr,
> &swpout_fallback_attr.attr,
> #endif
> diff --git a/mm/memory.c b/mm/memory.c
> index 209885a4134f..774dfd309cfe 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4189,8 +4189,10 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
> if (!mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
> gfp, entry))
> return folio;
> + count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
> folio_put(folio);
> }
> + count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK);
> order = next_order(&orders, order);
> }
>
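To restate the semantics the doc text needs to capture, the fallback
loop in alloc_swap_folio() with your change behaves roughly like this
(simplified; the swap-entry and zswap checks are omitted):

	while (orders) {
		folio = vma_alloc_folio(gfp, order, vma, addr);
		if (folio) {
			if (!mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
							    gfp, entry))
				return folio;	/* swapped in at this order */
			/* allocated fine, but the memcg charge failed */
			count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
			folio_put(folio);
		}
		/* reached on both allocation failure and charge failure */
		count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK);
		order = next_order(&orders, order);
	}

IOW, swpin_fallback_charge is always a subset of swpin_fallback, which
matches the anon_fault_fallback{,_charge} convention.
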
> --
> 2.45.0
>