Message-ID: <20240220212202.59ddc123@gandalf.local.home>
Date: Tue, 20 Feb 2024 21:22:02 -0500
From: Steven Rostedt <rostedt@...dmis.org>
To: Bixuan Cui <cuibixuan@...o.com>
Cc: akpm@...ux-foundation.org, mhiramat@...nel.org,
 mathieu.desnoyers@...icios.com, linux-kernel@...r.kernel.org,
 linux-trace-kernel@...r.kernel.org, linux-mm@...ck.org,
 opensource.kernel@...o.com
Subject: Re: [PATCH -next v6 0/2] Make memory reclamation measurable

On Wed, 21 Feb 2024 09:44:32 +0800
Bixuan Cui <cuibixuan@...o.com> wrote:

> ping~
> 

It's up to the memory management folks to decide on this.

-- Steve


> On 2024/1/5 9:36, Bixuan Cui wrote:
> > When system memory is low, kswapd reclaims memory. The key steps of
> > memory reclamation are:
> > 1. shrink_lruvec
> >    * shrink_active_list, moves folios from the active LRU to the inactive LRU
> >    * shrink_inactive_list, shrinks folios from the inactive LRU list
> > 2. shrink_slab (callbacks sketched below)
> >    * shrinker->count_objects(), calculates how much memory is freeable
> >    * shrinker->scan_objects(), reclaims the slab memory
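> > 
> > For reference, the two shrinker callbacks have roughly this shape (a
> > minimal sketch; demo_nr_cached and demo_free_some are made-up helpers,
> > not code from any real driver):
> > 
> >     static unsigned long demo_count_objects(struct shrinker *shrink,
> >                                             struct shrink_control *sc)
> >     {
> >         /* report how many objects could be freed; 0 means nothing to do */
> >         return atomic_long_read(&demo_nr_cached);
> >     }
> > 
> >     static unsigned long demo_scan_objects(struct shrinker *shrink,
> >                                            struct shrink_control *sc)
> >     {
> >         /* free up to sc->nr_to_scan objects and return the number freed,
> >          * or SHRINK_STOP if nothing can be reclaimed right now */
> >         return demo_free_some(sc->nr_to_scan);
> >     }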
> > 
> > The existing tracers in the vmscan are as follows:
> > 
> > --do_try_to_free_pages
> > --shrink_zones
> > --trace_mm_vmscan_node_reclaim_begin (tracer)
> > --shrink_node
> > --shrink_node_memcgs
> >    --trace_mm_vmscan_memcg_shrink_begin (tracer)
> >    --shrink_lruvec
> >      --shrink_list
> >        --shrink_active_list
> >          --trace_mm_vmscan_lru_shrink_active (tracer)
> >        --shrink_inactive_list
> >          --trace_mm_vmscan_lru_shrink_inactive (tracer)
> >      --shrink_active_list
> >    --shrink_slab
> >      --do_shrink_slab
> >      --shrinker->count_objects()
> >      --trace_mm_shrink_slab_start (tracer)
> >      --shrinker->scan_objects()
> >      --trace_mm_shrink_slab_end (tracer)
> >    --trace_mm_vmscan_memcg_shrink_end (tracer)
> > --trace_mm_vmscan_node_reclaim_end (tracer)
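> > 
> > Each begin/end tracer pair brackets one reclaim pass, so a probe that
> > timestamps both events can derive the duration. Conceptually (an
> > illustration of what the measurement computes, not actual kernel code):
> > 
> >     /* around one inactive-list pass: */
> >     u64 t0 = ktime_get_ns();
> >     unsigned long nr_reclaimed = shrink_inactive_list(nr_to_scan, lruvec, sc, lru);
> >     u64 duration_us = (ktime_get_ns() - t0) / NSEC_PER_USEC;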
> > 
> > If we capture the duration of each shrink pass and the amount it
> > reclaims, for both the LRU and slab paths, we can measure memory
> > reclamation, as follows.
> > 
> > Measuring memory reclamation with bpf:
> >    LRU FILE:
> >      CPU  COMM     ShrinkActive(us)  ShrinkInactive(us)  Reclaim(page)
> >      7    kswapd0  26                51                  32
> >      7    kswapd0  52                47                  13
> >    SLAB:
> >      CPU  COMM     OBJ_NAME                 Count_Dur(us)  Freeable(page)  Scan_Dur(us)  Reclaim(page)
> >      1    kswapd0  super_cache_scan.cfi_jt  2              341             3225          128
> >      7    kswapd0  super_cache_scan.cfi_jt  0              2247            8524          1024
> >      7    kswapd0  super_cache_scan.cfi_jt  2367           0               0             0
> > 
> > For this, add new tracers to shrink_active_list()/shrink_inactive_list()
> > and around shrinker->count_objects(); a rough sketch of the count-side
> > placement follows.
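> > 
> > In do_shrink_slab() the count-side hook would sit roughly like this
> > (the event names here are illustrative guesses; see the patches for
> > the real ones):
> > 
> >     trace_mm_shrink_count_start(shrinker, shrinkctl);
> >     freeable = shrinker->count_objects(shrinker, shrinkctl);
> >     trace_mm_shrink_count_end(shrinker, shrinkctl, freeable);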
> > 
> > Changes:
> > v6: * Add Reviewed-by from Steven Rostedt.
> > v5: * Use 'DECLARE_EVENT_CLASS(mm_vmscan_lru_shrink_start_template' to
> > replace 'TRACE_EVENT(mm_vmscan_lru_shrink_inactive/active_start'
> >      * Add an explanation for the new shrink lru events to 'mm: vmscan: add new event to trace shrink lru'
> > v4: Add Reviewed-by and a Changelog to every patch.
> > v3: Swap the positions of 'nid' and 'freeable' to prevent a padding hole in the trace event.
> > v2: Also modify trace_mm_vmscan_lru_shrink_inactive() in evict_folios() to fix a build error.
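> > 
> > The v5 event class keeps the active/inactive start events sharing one
> > definition; schematically (the fields shown are illustrative, not the
> > exact ones from the patch):
> > 
> >     DECLARE_EVENT_CLASS(mm_vmscan_lru_shrink_start_template,
> > 
> >         TP_PROTO(int nid),
> > 
> >         TP_ARGS(nid),
> > 
> >         TP_STRUCT__entry(
> >             __field(int, nid)
> >         ),
> > 
> >         TP_fast_assign(
> >             __entry->nid = nid;
> >         ),
> > 
> >         TP_printk("nid=%d", __entry->nid)
> >     );
> > 
> >     DEFINE_EVENT(mm_vmscan_lru_shrink_start_template,
> >                  mm_vmscan_lru_shrink_active_start,
> >         TP_PROTO(int nid),
> >         TP_ARGS(nid)
> >     );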
> > 
> > cuibixuan (2):
> >    mm: shrinker: add new event to trace shrink count
> >    mm: vmscan: add new event to trace shrink lru
> > 
> >   include/trace/events/vmscan.h | 80 ++++++++++++++++++++++++++++++++++-
> >   mm/shrinker.c                 |  4 ++
> >   mm/vmscan.c                   | 11 +++--
> >   3 files changed, 90 insertions(+), 5 deletions(-)
> >   

