Message-ID: <aOVxrwQ8MHbaRk6J@slm.duckdns.org>
Date: Tue, 7 Oct 2025 10:01:51 -1000
From: Tejun Heo <tj@...nel.org>
To: Daniel Sedlak <daniel.sedlak@...77.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>, Jonathan Corbet <corbet@....net>,
Neal Cardwell <ncardwell@...gle.com>,
Kuniyuki Iwashima <kuniyu@...gle.com>,
David Ahern <dsahern@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Shakeel Butt <shakeel.butt@...ux.dev>,
Yosry Ahmed <yosry.ahmed@...ux.dev>, linux-mm@...ck.org,
netdev@...r.kernel.org, Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>, cgroups@...r.kernel.org,
Michal Koutný <mkoutny@...e.com>,
Matyas Hurtik <matyas.hurtik@...77.com>
Subject: Re: [PATCH v5] memcg: expose socket memory pressure in a cgroup
On Tue, Oct 07, 2025 at 02:50:56PM +0200, Daniel Sedlak wrote:
...
> 1) None - keeping the reported duration local to that cgroup:
> 2) Propagating the duration upwards (using rstat or simple iteration):
> 3) Propagating the duration downwards (write only locally,
> read traversing hierarchy upwards):
...
> We chose variant 1; that is why it is a separate file instead of another
> counter in memory.stat. Variant 2 seems the most fitting, but the
> calculated value would be misleading and hard to interpret. Ideally, we
> would go with variant 3, as it mirrors the logic of
> mem_cgroup_under_socket_pressure(); however, the third variant can also
> be calculated manually from variant 1, so we chose variant 1 as the most
> versatile option that does not leak internal implementation details,
> which may change in the future.
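
That last point can be made concrete: a variant-3 view is derivable from the
variant-1 per-cgroup values by walking the ancestor chain from the cgroup of
interest up to the root, much like mem_cgroup_under_socket_pressure() walks
upwards in-kernel. Below is a minimal userspace sketch, assuming a
hypothetical per-cgroup file name memory.net.socket_pressure_duration and
treating the sum of the per-level durations as the rolled-up value (an
approximation, since overlapping pressure intervals are counted once per
level):

    import os

    # Hypothetical per-cgroup file exposing the locally recorded (variant 1)
    # socket pressure duration; the real name in the patch may differ.
    LOCAL_FILE = "memory.net.socket_pressure_duration"
    CGROUP_ROOT = "/sys/fs/cgroup"

    def local_duration(cgroup_path):
        """Read one cgroup's own (variant 1) duration; 0 if the file is absent."""
        try:
            with open(os.path.join(cgroup_path, LOCAL_FILE)) as f:
                return int(f.read().strip())
        except FileNotFoundError:
            return 0

    def hierarchical_duration(cgroup_path):
        """Variant-3 style rollup derived from variant 1: accumulate the local
        durations of the cgroup and all of its ancestors up to the cgroup
        root, mirroring the upward walk that
        mem_cgroup_under_socket_pressure() performs in-kernel."""
        total = 0
        path = os.path.abspath(cgroup_path)
        root = os.path.abspath(CGROUP_ROOT)
        while path.startswith(root):
            total += local_duration(path)
            if path == root:
                break
            path = os.path.dirname(path)
        return total

    if __name__ == "__main__":
        print(hierarchical_duration("/sys/fs/cgroup/workload/web"))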
I'm not against going with 1), but let's not add a separate file for this.
Can't you use memory.stat.local? It'd be better to have aggregation in
memory.stat, but we can worry about that later.
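
memory.stat (and a memory.stat.local counterpart) is a flat-keyed file, one
"key value" pair per line, so a consumer could pull such a counter out with
the same parsing it already uses for memory.stat. A minimal sketch, where
both the memory.stat.local file and the socket_pressure_duration key are
assumptions for illustration:

    # Parse a flat-keyed cgroup v2 stat file ("key value" per line, as in
    # memory.stat) and return one entry.  The memory.stat.local file and the
    # socket_pressure_duration key are placeholders, not confirmed names.
    def read_stat_local(cgroup_path, key="socket_pressure_duration"):
        stats = {}
        with open(f"{cgroup_path}/memory.stat.local") as f:
            for line in f:
                name, _, value = line.partition(" ")
                stats[name] = int(value)
        return stats.get(key, 0)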
Thanks.
--
tejun