Message-ID: <t7q2d73nxdd75sghobnpmzi7bsbvden6lbrtejkxyoqfl2xilv@4ewvm2od2sf3>
Date: Thu, 8 May 2025 11:56:19 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: xu.xin16@....com.cn
Cc: akpm@...ux-foundation.org, david@...hat.com,
linux-kernel@...r.kernel.org, wang.yaxin@....com.cn, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, yang.yang29@....com.cn
Subject: Re: [PATCH v2 0/9] support ksm_stat showing at cgroup level
On Tue, May 06, 2025 at 01:09:25PM +0800, xu.xin16@....com.cn wrote:
> > > Users can obtain the KSM information of a cgroup just by:
> > >
> > > # cat /sys/fs/cgroup/memory.ksm_stat
> > > ksm_rmap_items 76800
> > > ksm_zero_pages 0
> > > ksm_merging_pages 76800
> > > ksm_process_profit 309657600
> > >
> > > Current implementation supports both cgroup v2 and cgroup v1.
> > >
> >
> > Before adding these stats to memcg, add global stats for them in
> > enum node_stat_item and then you can expose them in memcg through
> > memory.stat instead of a new interface.
>
> Dear shakeel.butt,
>
> If we add these ksm-related items to enum node_stat_item and embed extra
> counter-updating code such as __lruvec_stat_add_folio() into the KSM
> procedure, it adds CPU overhead to the normal KSM code paths.
How is it more expensive than traversing all processes?
__lruvec_stat_add_folio() and related functions are already called in many
performance-critical code paths, so I don't see any issue with calling them
in KSM.
> Alternatively, we can just traverse all processes of this memcg and sum
> their KSM counters, as the current patch set implements.
>
> If only including a single "KSM merged pages" entry in memory.stat, I think it is reasonable as
> it reflects this memcg's KSM page count. However, adding the other three KSM-related metrics is
> less advisable since they are strongly coupled with KSM internals and would primarily interest
> users monitoring KSM-specific behavior.
We can discuss each individual ksm stat and decide whether it makes sense
to add it to memcg or not.
>
> Last but not least, the rationale for adding a ksm_stat entry to memcg also lies in maintaining
> structural consistency with the existing /proc/<pid>/ksm_stat interface.
Sorry, I don't agree with this rationale. This is a separate interface
and can be different from the existing ksm interface. We can define it
however we think is the right way for memcg, and yes, there can be stats
overlap with the older interface.
For now I would say start with the ksm metrics that are appropriate to
be exposed globally and then we can see if those are fine for memcg as
well.