Message-ID: <CAGsJ_4x9MB2yrs2zbZz3TpAjAzD-jzbmHY6+nGEy-uOE4y9jFw@mail.gmail.com>
Date: Fri, 9 Aug 2024 16:40:47 +0800
From: Barry Song <21cnbao@...il.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org, chrisl@...nel.org,
david@...hat.com, kaleshsingh@...gle.com, kasong@...cent.com,
linux-kernel@...r.kernel.org, ioworker0@...il.com,
baolin.wang@...ux.alibaba.com, ziy@...dia.com, hanchuanhua@...o.com,
Barry Song <v-songbaohua@...o.com>
Subject: Re: [PATCH RFC 1/2] mm: collect the number of anon large folios
On Fri, Aug 9, 2024 at 4:27 PM Ryan Roberts <ryan.roberts@....com> wrote:
>
> On 09/08/2024 09:13, Ryan Roberts wrote:
> > On 08/08/2024 02:04, Barry Song wrote:
> >> From: Barry Song <v-songbaohua@...o.com>
> >>
> >> When a new anonymous mTHP is added to the rmap, we increase the count.
> >> We reduce the count whenever an mTHP is completely unmapped.
> >>
> >> Signed-off-by: Barry Song <v-songbaohua@...o.com>
> >> ---
> >> Documentation/admin-guide/mm/transhuge.rst | 5 +++++
> >> include/linux/huge_mm.h | 15 +++++++++++++--
> >> mm/huge_memory.c | 2 ++
> >> mm/rmap.c | 3 +++
> >> 4 files changed, 23 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> >> index 058485daf186..715f181543f6 100644
> >> --- a/Documentation/admin-guide/mm/transhuge.rst
> >> +++ b/Documentation/admin-guide/mm/transhuge.rst
> >> @@ -527,6 +527,11 @@ split_deferred
> >> it would free up some memory. Pages on split queue are going to
> >> be split under memory pressure, if splitting is possible.
> >>
> >> +anon_num
> >> + the number of anon huge pages we have in the whole system.
> >> + These huge pages could be still entirely mapped and have partially
> >> + unmapped and unused subpages.
> >
> > nit: "entirely mapped and have partially unmapped and unused subpages" ->
> > "entirely mapped or have partially unmapped/unused subpages"
> >
> >> +
> >> As the system ages, allocating huge pages may be expensive as the
> >> system uses memory compaction to copy data around memory to free a
> >> huge page for use. There are some counters in ``/proc/vmstat`` to help
> >> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> >> index e25d9ebfdf89..294c348fe3cc 100644
> >> --- a/include/linux/huge_mm.h
> >> +++ b/include/linux/huge_mm.h
> >> @@ -281,6 +281,7 @@ enum mthp_stat_item {
> >> MTHP_STAT_SPLIT,
> >> MTHP_STAT_SPLIT_FAILED,
> >> MTHP_STAT_SPLIT_DEFERRED,
> >> + MTHP_STAT_NR_ANON,
> >> __MTHP_STAT_COUNT
> >> };
> >>
> >> @@ -291,14 +292,24 @@ struct mthp_stat {
> >> #ifdef CONFIG_SYSFS
> >> DECLARE_PER_CPU(struct mthp_stat, mthp_stats);
> >>
> >> -static inline void count_mthp_stat(int order, enum mthp_stat_item item)
> >> +static inline void mod_mthp_stat(int order, enum mthp_stat_item item, int delta)
> >> {
> >> if (order <= 0 || order > PMD_ORDER)
> >> return;
> >>
> >> - this_cpu_inc(mthp_stats.stats[order][item]);
> >> + this_cpu_add(mthp_stats.stats[order][item], delta);
> >> +}
> >> +
> >> +static inline void count_mthp_stat(int order, enum mthp_stat_item item)
> >> +{
> >> + mod_mthp_stat(order, item, 1);
> >> }
> >> +
> >> #else
> >> +static inline void mod_mthp_stat(int order, enum mthp_stat_item item, int delta)
> >> +{
> >> +}
> >> +
> >> static inline void count_mthp_stat(int order, enum mthp_stat_item item)
> >> {
> >> }
> >> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >> index 697fcf89f975..b6bc2a3791e3 100644
> >> --- a/mm/huge_memory.c
> >> +++ b/mm/huge_memory.c
> >> @@ -578,6 +578,7 @@ DEFINE_MTHP_STAT_ATTR(shmem_fallback_charge, MTHP_STAT_SHMEM_FALLBACK_CHARGE);
> >> DEFINE_MTHP_STAT_ATTR(split, MTHP_STAT_SPLIT);
> >> DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED);
> >> DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED);
> >> +DEFINE_MTHP_STAT_ATTR(anon_num, MTHP_STAT_NR_ANON);
>
> Why are the user-facing and internal names different? Perhaps it would be
> clearer to call this nr_anon in sysfs?
>
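If we rename it to nr_anon on the sysfs side as you suggest, the attribute
definition would presumably just become something like (sketch only):

	DEFINE_MTHP_STAT_ATTR(nr_anon, MTHP_STAT_NR_ANON);

(and the new stats_attrs[] entry would then be &nr_anon_attr.attr).
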
> >>
> >> static struct attribute *stats_attrs[] = {
> >> &anon_fault_alloc_attr.attr,
> >> @@ -591,6 +592,7 @@ static struct attribute *stats_attrs[] = {
> >> &split_attr.attr,
> >> &split_failed_attr.attr,
> >> &split_deferred_attr.attr,
> >> + &anon_num_attr.attr,
> >> NULL,
> >> };
> >>
> >> diff --git a/mm/rmap.c b/mm/rmap.c
> >> index 901950200957..2b722f26224c 100644
> >> --- a/mm/rmap.c
> >> +++ b/mm/rmap.c
> >> @@ -1467,6 +1467,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
> >> }
> >>
> >> __folio_mod_stat(folio, nr, nr_pmdmapped);
> >> + mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, 1);
> >> }
> >>
> >> static __always_inline void __folio_add_file_rmap(struct folio *folio,
> >> @@ -1582,6 +1583,8 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
> >> list_empty(&folio->_deferred_list))
> >> deferred_split_folio(folio);
> >> __folio_mod_stat(folio, -nr, -nr_pmdmapped);
> >> + if (folio_test_anon(folio) && !atomic_read(mapped))
> >
> > Agree that atomic_read() is dodgy here.
> >
> > Not sure I fully understand why David prefers to do the unaccounting at
> > free-time though? It feels unbalanced to me to increment when first mapped but
> > decrement when freed. Surely it's safer to either use alloc/free or use first
> > map/last map?
As long as we do the accounting when the Anon flag is cleared for the folio,
we should be safe. It's challenging to add +1 when allocating a large folio,
because at that point we don't yet know its intended use: it could end up as
file, anon, or shmem.
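
A rough sketch of what I have in mind (the helper name and the exact hook
point are purely illustrative, somewhere on the large-folio free path before
folio->mapping and the anon flag are cleared):

	/* hypothetical helper; name and call site are illustrative only */
	static inline void mthp_unaccount_anon_on_free(struct folio *folio)
	{
		if (folio_test_anon(folio) && folio_test_large(folio))
			mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, -1);
	}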
> >
> > If using alloc/free isn't there a THP constructor/destructor that prepares the
> > deferred list? (My memory may be failing me). Could we use that?
>
> Additionally, if we wanted to extend (eventually) to track the number of shmem
> and file mthps in additional counters, could we also account using similar folio
> free-time hooks? If not, it might be an argument to account in rmap_unmap to be
> consistent for all?
I've been struggling quite a bit with rmap. Despite trying various
approaches, I'm still occasionally seeing a negative mTHP counter after
running for some hours on phones. It seems that rmap is really tricky to
handle. I admit that I have failed on rmap :-)
On the other hand, for anon folios, we have cases where they are split from
order M to order N. So we add 1 when a new anon folio is added to the rmap
and subtract 1 when we either split it or free it. This approach seems
clearer to me. When we split from order M to order N, we subtract 1 from the
order-M counter and add 1 << (M - N) to the order-N counter.
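
Roughly something like this in the split path (a sketch only; old_order and
new_order stand for whatever the split code already tracks, and the exact
hook point still needs to be confirmed):

	/* sketch: move NR_ANON accounting from the old order to the new one */
	if (folio_test_anon(folio)) {
		mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
		mod_mthp_stat(new_order, MTHP_STAT_NR_ANON,
			      1 << (old_order - new_order));
	}

Since mod_mthp_stat() ignores order <= 0, splitting all the way down to
order-0 pages naturally drops out of the mTHP counters.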
>
>
> >
> >> + mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, -1);
> >>
> >> /*
> >> * It would be tidy to reset folio_test_anon mapping when fully
Thanks
Barry