Message-ID: <20230911160824.GB103342@cmpxchg.org>
Date: Mon, 11 Sep 2023 12:08:24 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Xin Hao <haoxing990@...il.com>
Cc: mhocko@...nel.org, roman.gushchin@...ux.dev, shakeelb@...gle.com,
akpm@...ux-foundation.org, cgroups@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: memcg: add THP swap out info for anonymous reclaim
On Sat, Sep 09, 2023 at 11:52:41PM +0800, Xin Hao wrote:
> At present, we support a per-memcg reclaim strategy, but we do not
> know how many transparent huge pages are being reclaimed per memcg.
> Transparent huge pages have to be split before they can be reclaimed,
> which can become a performance bottleneck. For example, when two
> memcgs (A & B) are reclaiming anonymous pages at the same time and
> memcg 'A' is reclaiming a large number of transparent huge pages, we
> can tell that the bottleneck is caused by memcg 'A'. Therefore, to
> make such problems easier to analyze, add THP swap out info on a
> per-memcg basis.
>
> Signed-off-by: Xin Hao <vernhao@...cent.com>
That sounds reasonable. A few comments below:
> @@ -4131,6 +4133,10 @@ static const unsigned int memcg1_events[] = {
> PGPGOUT,
> PGFAULT,
> PGMAJFAULT,
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + THP_SWPOUT,
> + THP_SWPOUT_FALLBACK,
> +#endif
> };
Cgroup1 is maintenance-only, please drop this hunk.
> static void memcg1_stat_format(struct mem_cgroup *memcg, struct seq_buf *s)
> diff --git a/mm/page_io.c b/mm/page_io.c
> index fe4c21af23f2..008ada2e024a 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -208,8 +208,10 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
> static inline void count_swpout_vm_event(struct folio *folio)
> {
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> - if (unlikely(folio_test_pmd_mappable(folio)))
> + if (unlikely(folio_test_pmd_mappable(folio))) {
> + count_memcg_events(folio_memcg(folio), THP_SWPOUT, 1);
count_memcg_folio_events(), please - see the sketch below this hunk.
> count_vm_event(THP_SWPOUT);
> + }
> #endif
> count_vm_events(PSWPOUT, folio_nr_pages(folio));
> }
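Since count_memcg_folio_events() doesn't exist yet, to be concrete:
what I have in mind is a small folio-aware wrapper in memcontrol.h, so
callsites don't have to open-code the folio_memcg() lookup. Roughly
something like this (untested sketch, name as suggested above):

	static inline void count_memcg_folio_events(struct folio *folio,
			enum vm_event_item idx, unsigned long nr)
	{
		struct mem_cgroup *memcg = folio_memcg(folio);

		if (memcg)
			count_memcg_events(memcg, idx, nr);
	}

The NULL check also handles folios that aren't charged to any cgroup.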
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index ea57a43ebd6b..29a82b72345a 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1928,6 +1928,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> folio_list))
> goto activate_locked;
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + count_memcg_events(folio_memcg(folio),
> + THP_SWPOUT_FALLBACK, 1);
count_memcg_folio_events()
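With a helper like the above, both callsites collapse to one-liners,
e.g. here (sketch):

			count_memcg_folio_events(folio, THP_SWPOUT_FALLBACK, 1);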