Message-ID: <rxgfvctb5a5plo2o54uegyocmofdcxfxfwwjsn2lrgazdxxbnc@b4xdyfsuplwd>
Date: Wed, 19 Mar 2025 11:33:10 +0100
From: Michal Koutný <mkoutny@...e.com>
To: Hao Jia <jiahao1@...iang.com>
Cc: Hao Jia <jiahao.kernel@...il.com>, hannes@...xchg.org,
akpm@...ux-foundation.org, tj@...nel.org, corbet@....net, mhocko@...nel.org,
roman.gushchin@...ux.dev, shakeel.butt@...ux.dev, muchun.song@...ux.dev,
cgroups@...r.kernel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: vmscan: Split proactive reclaim statistics from
direct reclaim statistics
On Wed, Mar 19, 2025 at 05:49:15PM +0800, Hao Jia <jiahao1@...iang.com> wrote:
> root
> `- a
>    `- b
>       `- c
>
> We have a userspace proactive memory reclaim process that writes to
> a/memory.reclaim, observes a/memory.stat, then writes to
> b/memory.reclaim and observes b/memory.stat. This pattern is the same
> for other cgroups as well, so all memory cgroups (a, b, c) have the
> **same writer**. So I need per-cgroup proactive memory reclaim statistics.
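
The per-cgroup loop described above can be sketched roughly as follows. This is an illustrative sketch only: it points CGROUP_ROOT at a mock directory tree (on a real system it would be /sys/fs/cgroup, the files would be created by the kernel, and writes would need privileges), and the pgsteal_proactive counter name is assumed from this patch series' intent.

```shell
#!/bin/sh
# Mock cgroup tree so the sketch runs unprivileged; on a real system
# CGROUP_ROOT would be /sys/fs/cgroup and the kernel provides the files.
CGROUP_ROOT=$(mktemp -d)
for cg in a a/b a/b/c; do
    mkdir -p "$CGROUP_ROOT/$cg"
    # Mock control files (real ones come from the kernel, not mkdir/echo).
    echo "pgsteal_proactive 0" > "$CGROUP_ROOT/$cg/memory.stat"
    : > "$CGROUP_ROOT/$cg/memory.reclaim"
done

# The proactive reclaim daemon's loop: write to memory.reclaim,
# then observe the same cgroup's memory.stat, cgroup by cgroup.
for cg in a a/b a/b/c; do
    echo "100M" > "$CGROUP_ROOT/$cg/memory.reclaim"
    grep proactive "$CGROUP_ROOT/$cg/memory.stat"
done
```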
Sorry for the lack of clarity, it got lost among the mails. Originally, I
thought about tracking it per each write(2), but in reality it'd be per
FD. Similar to how memory.peak allows different FDs to see different
values. WDYT?
Michal