Message-ID: <0a5482ad-3625-4c22-9eef-574eabd7c2bf@redhat.com>
Date: Tue, 23 Sep 2025 10:26:21 +0200
From: David Hildenbrand <david@...hat.com>
To: xu.xin16@....com.cn, akpm@...ux-foundation.org, shakeel.butt@...ux.dev,
hannes@...xchg.org, mhocko@...nel.org, roman.gushchin@...ux.dev
Cc: chengming.zhou@...ux.dev, muchun.song@...ux.dev,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, cgroups@...r.kernel.org
Subject: Re: [PATCH linux-next v3 0/6] memcg: Support per-memcg KSM metrics
On 21.09.25 17:07, xu.xin16@....com.cn wrote:
> From: xu xin <xu.xin16@....com.cn>
>
> v2->v3:
> ------
> Fixed compilation errors caused by a missing header inclusion or a missing
> function definition under some kernel configs.
> https://lore.kernel.org/all/202509142147.WQI0impC-lkp@intel.com/
> https://lore.kernel.org/all/202509142046.QatEaTQV-lkp@intel.com/
>
> v1->v2:
> ------
> According to Shakeel's suggestion, expose these metric items in memory.stat
> instead of adding a new interface.
> https://lore.kernel.org/all/ir2s6sqi6hrbz7ghmfngbif6fbgmswhqdljlntesurfl2xvmmv@yp3w2lqyipb5/
>
> Background
> ==========
>
> With the enablement of container-level KSM (e.g., via prctl [1]), there is
> a growing demand for container-level observability of KSM behavior. However,
> current cgroup implementations lack support for exposing KSM-related metrics.
>
> So add the counters to the existing memory.stat without adding a new interface.
> To display the per-memcg KSM statistic counters, we traverse all processes of a
> memcg and sum the processes' ksm_rmap_items counters, instead of adding enum
> items to memcg_stat_item or node_stat_item and updating the corresponding enum
> counters whenever ksmd manipulates pages.
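
A minimal sketch of that traversal, assuming a hypothetical helper name
memcg_ksm_stat() and relying on the existing css_task_iter API plus the
per-mm KSM fields that already live in struct mm_struct under CONFIG_KSM;
the actual patch may structure this differently:

#include <linux/cgroup.h>
#include <linux/memcontrol.h>
#include <linux/mm_types.h>
#include <linux/sched/mm.h>

/*
 * Sketch only: walk every process of a memcg, pin its mm, and sum the
 * per-mm KSM counters. memcg_ksm_stat() is a hypothetical name.
 */
static void memcg_ksm_stat(struct mem_cgroup *memcg,
			   unsigned long *rmap_items,
			   unsigned long *merging_pages)
{
	struct css_task_iter it;
	struct task_struct *task;

	*rmap_items = 0;
	*merging_pages = 0;

	/* Iterate thread-group leaders belonging to this memcg. */
	css_task_iter_start(&memcg->css, CSS_TASK_ITER_PROCS, &it);
	while ((task = css_task_iter_next(&it))) {
		struct mm_struct *mm = get_task_mm(task);

		if (!mm)
			continue;
		*rmap_items += mm->ksm_rmap_items;
		*merging_pages += mm->ksm_merging_pages;
		mmput(mm);
	}
	css_task_iter_end(&it);
}
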
>
> Now Linux users can look up all per-memcg KSM counters by:
>
> # cat /sys/fs/cgroup/xuxin/memory.stat | grep ksm
> ksm_rmap_items 0
> ksm_zero_pages 0
> ksm_merging_pages 0
> ksm_profit 0
No strong opinion from my side: it seems to mostly just collect the stats
from all tasks and summarize them per memcg.
--
Cheers
David / dhildenb