Message-ID: <20250506130925568unpXQ7vLOEaRX4iDWSow2@zte.com.cn>
Date: Tue, 6 May 2025 13:09:25 +0800 (CST)
From: <xu.xin16@....com.cn>
To: <shakeel.butt@...ux.dev>
Cc: <akpm@...ux-foundation.org>, <david@...hat.com>,
<linux-kernel@...r.kernel.org>, <wang.yaxin@....com.cn>,
<linux-mm@...ck.org>, <linux-fsdevel@...r.kernel.org>,
<yang.yang29@....com.cn>
Subject: Re: [PATCH v2 0/9] support ksm_stat showing at cgroup level
> > Users can obtain the KSM information of a cgroup just by:
> >
> > # cat /sys/fs/cgroup/memory.ksm_stat
> > ksm_rmap_items 76800
> > ksm_zero_pages 0
> > ksm_merging_pages 76800
> > ksm_process_profit 309657600
> >
> > Current implementation supports both cgroup v2 and cgroup v1.
> >
>
> Before adding these stats to memcg, add global stats for them in
> enum node_stat_item and then you can expose them in memcg through
> memory.stat instead of a new interface.
Dear shakeel.butt,
Adding these KSM-related items to enum node_stat_item would mean embedding extra
counter-updating code (e.g. __lruvec_stat_add_folio()) into the KSM code paths, which adds
CPU overhead every time a normal KSM operation happens. Alternatively, we can simply traverse
all processes of the memcg and sum their per-mm KSM counters, which is what the current patch
set does (a rough sketch follows below).
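To make this concrete, here is a minimal sketch of the summation approach, assuming
mem_cgroup_scan_tasks() for the per-memcg task walk and the same per-mm fields that
/proc/<pid>/ksm_stat already reads (the helper names below are made up for illustration,
not the exact code of this series):

#include <linux/memcontrol.h>
#include <linux/sched/mm.h>
#include <linux/ksm.h>

/* Sketch only: aggregate per-mm KSM counters across one memcg. */
struct memcg_ksm_sum {
	unsigned long rmap_items;
	unsigned long merging_pages;
	long profit;
};

static int memcg_ksm_add_task(struct task_struct *task, void *arg)
{
	struct memcg_ksm_sum *sum = arg;
	struct mm_struct *mm = get_task_mm(task);

	if (!mm)
		return 0;
	/* Same per-mm fields that /proc/<pid>/ksm_stat prints. */
	sum->rmap_items += mm->ksm_rmap_items;
	sum->merging_pages += mm->ksm_merging_pages;
	sum->profit += ksm_process_profit(mm);
	mmput(mm);
	return 0;	/* returning non-zero would stop the walk */
}

static void memcg_ksm_collect(struct mem_cgroup *memcg,
			      struct memcg_ksm_sum *sum)
{
	*sum = (struct memcg_ksm_sum){ };
	/* Walk every task charged to this memcg hierarchy. */
	mem_cgroup_scan_tasks(memcg, memcg_ksm_add_task, sum);
}

With this shape, the cost of collecting the numbers is paid only when memory.ksm_stat is
read, not on every merge/unmerge performed by ksmd.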
Including only a single "KSM merged pages" entry in memory.stat seems reasonable to me, as
it reflects this memcg's count of KSM-merged pages. However, adding the other three KSM-related
metrics is less advisable, since they are strongly coupled with KSM internals and would primarily
interest users monitoring KSM-specific behavior.
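For example, such a single entry might surface like this (the name "ksm_merging_pages" below
is just a placeholder for illustration, not a proposed ABI):

# grep ksm /sys/fs/cgroup/memory.stat
ksm_merging_pages 76800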
Last but not least, the rationale for adding a ksm_stat entry to memcg also lies in maintaining
structural consistency with the existing /proc/<pid>/ksm_stat interface.