Message-ID: <ZV3WnIJMzxT-Zkt4@tiehlicka>
Date: Wed, 22 Nov 2023 11:23:24 +0100
From: Michal Hocko <mhocko@...e.com>
To: Dmitry Rokosov <ddrokosov@...utedevices.com>
Cc: rostedt@...dmis.org, mhiramat@...nel.org, hannes@...xchg.org,
roman.gushchin@...ux.dev, shakeelb@...gle.com,
muchun.song@...ux.dev, akpm@...ux-foundation.org,
kernel@...rdevices.ru, rockosov@...il.com, cgroups@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
bpf@...r.kernel.org
Subject: Re: [PATCH v2 2/2] mm: memcg: introduce new event to trace
shrink_memcg
On Wed 22-11-23 13:01:56, Dmitry Rokosov wrote:
> The shrink_memcg flow plays a crucial role in memcg reclamation.
> Currently, it is not possible to trace this point from non-direct
> reclaim paths.
Is this really true? AFAICS we have
mm_vmscan_lru_isolate
mm_vmscan_lru_shrink_active
mm_vmscan_lru_shrink_inactive
which are at the very core of memory reclaim. Sure, post-processing
those is some work.
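
To be concrete about that post processing, something along these lines
already gives a cumulative kswapd reclaim number from the existing
tracepoint. This is an untested sketch; it assumes the vmscan trace
events are enabled and that the trace_pipe line format on your kernel
contains the nr_reclaimed= field it greps for (adjust the tracefs mount
point if needed):

#include <stdio.h>
#include <string.h>

/*
 * Untested sketch: sum up nr_reclaimed as reported by
 * mm_vmscan_lru_shrink_inactive for kswapd threads, read from
 * trace_pipe.  Enable the event first, e.g. via
 * /sys/kernel/tracing/events/vmscan/mm_vmscan_lru_shrink_inactive/enable.
 */
int main(void)
{
	FILE *f = fopen("/sys/kernel/tracing/trace_pipe", "r");
	char line[1024];
	unsigned long total = 0, reclaimed;

	if (!f) {
		perror("trace_pipe");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		char *p;

		/* only kswapd events of the tracepoint we care about */
		if (!strstr(line, "kswapd") ||
		    !strstr(line, "mm_vmscan_lru_shrink_inactive"))
			continue;

		p = strstr(line, "nr_reclaimed=");
		if (p && sscanf(p, "nr_reclaimed=%lu", &reclaimed) == 1) {
			total += reclaimed;
			printf("kswapd reclaimed so far: %lu pages\n", total);
		}
	}

	fclose(f);
	return 0;
}

Parsing the printed format with sscanf is obviously crude, but it is
enough to show the idea.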
[...]
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 45780952f4b5..6d89b39d9a91 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -6461,6 +6461,12 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
> */
> cond_resched();
>
> +#ifdef CONFIG_MEMCG
> + trace_mm_vmscan_memcg_shrink_begin(sc->order,
> + sc->gfp_mask,
> + memcg);
> +#endif
This is a common code path for both node and direct reclaim, which means
that we will have multiple begin/end tracepoints covering similar operations.
To me that sounds excessive. If you are missing a cumulative kswapd
alternative to
mm_vmscan_direct_reclaim_begin
mm_vmscan_direct_reclaim_end
mm_vmscan_memcg_reclaim_begin
mm_vmscan_memcg_reclaim_end
mm_vmscan_memcg_softlimit_reclaim_begin
mm_vmscan_memcg_softlimit_reclaim_end
mm_vmscan_node_reclaim_begin
mm_vmscan_node_reclaim_end
then place it into the kswapd path. But it would be really great to
elaborate some more on why this is really needed. Can you not simply
aggregate the stats for kswapd from the existing tracepoints?
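
For the record, the kswapd side I have in mind would be something like
the below. This is an untested sketch only; the
trace_mm_vmscan_kswapd_reclaim_{begin,end} names are made up here and
would need proper TRACE_EVENT definitions in
include/trace/events/vmscan.h:

/* mm/vmscan.c, untested sketch with made up tracepoint names */
static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
{
	/* ... existing scan_control setup ... */

	trace_mm_vmscan_kswapd_reclaim_begin(order, sc.gfp_mask);

	/* ... existing priority loop doing the actual reclaim ... */

	trace_mm_vmscan_kswapd_reclaim_end(sc.nr_reclaimed);

	/* ... existing exit path ... */
}

That would mirror mm_vmscan_direct_reclaim_{begin,end} without adding
another begin/end pair into shrink_node_memcgs().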
--
Michal Hocko
SUSE Labs