Message-ID: <20231122105836.xhlgbwmwjdwd3g5v@CAB-WSD-L081021>
Date:   Wed, 22 Nov 2023 13:58:36 +0300
From:   Dmitry Rokosov <ddrokosov@...utedevices.com>
To:     Michal Hocko <mhocko@...e.com>
CC:     <rostedt@...dmis.org>, <mhiramat@...nel.org>, <hannes@...xchg.org>,
        <roman.gushchin@...ux.dev>, <shakeelb@...gle.com>,
        <muchun.song@...ux.dev>, <akpm@...ux-foundation.org>,
        <kernel@...rdevices.ru>, <rockosov@...il.com>,
        <cgroups@...r.kernel.org>, <linux-mm@...ck.org>,
        <linux-kernel@...r.kernel.org>, <bpf@...r.kernel.org>
Subject: Re: [PATCH v2 2/2] mm: memcg: introduce new event to trace
 shrink_memcg

Hello Michal,

Thank you for the quick review!

On Wed, Nov 22, 2023 at 11:23:24AM +0100, Michal Hocko wrote:
> On Wed 22-11-23 13:01:56, Dmitry Rokosov wrote:
> > The shrink_memcg flow plays a crucial role in memcg reclamation.
> > Currently, it is not possible to trace this point from non-direct
> > reclaim paths.
> 
> Is this really true? AFAICS we have
> mm_vmscan_lru_isolate
> mm_vmscan_lru_shrink_active
> mm_vmscan_lru_shrink_inactive
> 
> which are in the very core of the memory reclaim. Sure, post-processing
> those is some work.

Sure, you are absolutely right. In the usual scenario, the memcg
shrinker utilizes two sub-shrinkers: slab and LRU. We can enable the
tracepoints you mentioned and analyze them. However, there is one
potential issue: enabling these tracepoints will emit reclaim events
for all pages. Although we can filter them per PID, we cannot filter
them per cgroup. Nevertheless, there are times when it would be
extremely beneficial to understand how effective the reclaim process is
within the relevant cgroup. For this reason, I am adding the cgroup
name to the memcg tracepoints and implementing a cumulative tracepoint
for memcg shrink (LRU + slab), as sketched below.
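
To make the idea concrete, here is a minimal sketch of such a begin
tracepoint (not the exact patch; the field layout and the use of
cgroup_name() are illustrative), which would make the events
filterable per cgroup:

TRACE_EVENT(mm_vmscan_memcg_shrink_begin,

	TP_PROTO(int order, gfp_t gfp_flags, struct mem_cgroup *memcg),

	TP_ARGS(order, gfp_flags, memcg),

	TP_STRUCT__entry(
		__field(int, order)
		__field(gfp_t, gfp_flags)
		/* record the cgroup name so userspace can filter per cgroup */
		__string(name, cgroup_name(memcg->css.cgroup))
	),

	TP_fast_assign(
		__entry->order = order;
		__entry->gfp_flags = gfp_flags;
		__assign_str(name, cgroup_name(memcg->css.cgroup));
	),

	TP_printk("order=%d gfp_flags=%s memcg=%s",
		  __entry->order,
		  show_gfp_flags(__entry->gfp_flags),
		  __get_str(name))
);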

> 
> [...]
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 45780952f4b5..6d89b39d9a91 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -6461,6 +6461,12 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
> >  		 */
> >  		cond_resched();
> >  
> > +#ifdef CONFIG_MEMCG
> > +		trace_mm_vmscan_memcg_shrink_begin(sc->order,
> > +						   sc->gfp_mask,
> > +						   memcg);
> > +#endif
> 
> this is a common code path for node and direct reclaim, which means that
> we will have multiple begin/end tracepoints covering similar operations.
> To me that sounds excessive. If you are missing a cumulative kswapd
> alternative to
> mm_vmscan_direct_reclaim_begin
> mm_vmscan_direct_reclaim_end
> mm_vmscan_memcg_reclaim_begin
> mm_vmscan_memcg_reclaim_end
> mm_vmscan_memcg_softlimit_reclaim_begin
> mm_vmscan_memcg_softlimit_reclaim_end
> mm_vmscan_node_reclaim_begin
> mm_vmscan_node_reclaim_end
> 
> then place it into the kswapd path. But it would be really great to
> elaborate some more on why this is really needed. Can you not simply
> aggregate stats for kswapd from the existing tracepoints?
> 
> -- 
> Michal Hocko
> SUSE Labs
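
For reference, if we go with a single cumulative pair in the kswapd
path instead, I would imagine something roughly like the sketch below
(the tracepoint names are hypothetical, mirroring the existing
mm_vmscan_direct_reclaim_{begin,end} pair):

static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
{
	...
	/* hypothetical: cumulative begin event for kswapd reclaim */
	trace_mm_vmscan_kswapd_reclaim_begin(order, sc.gfp_mask);

	/* existing priority loop doing the actual reclaim */
	...

	/* hypothetical: cumulative end event with the reclaimed count */
	trace_mm_vmscan_kswapd_reclaim_end(sc.nr_reclaimed);
	...
}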

-- 
Thank you,
Dmitry
