Message-ID: <kiqfxuepou5lwqffxhdshau5lw6bkkrvshv4ekhvrmugweipau@rreefm2uttjp>
Date: Mon, 11 Aug 2025 14:31:26 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Tejun Heo <tj@...nel.org>
Cc: Daniel Sedlak <daniel.sedlak@...77.com>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>,
Jonathan Corbet <corbet@....net>, Neal Cardwell <ncardwell@...gle.com>,
Kuniyuki Iwashima <kuniyu@...gle.com>, David Ahern <dsahern@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>, Yosry Ahmed <yosry.ahmed@...ux.dev>, linux-mm@...ck.org,
netdev@...r.kernel.org, Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>, Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>, cgroups@...r.kernel.org,
Michal Koutný <mkoutny@...e.com>, Matyas Hurtik <matyas.hurtik@...77.com>
Subject: Re: [PATCH v4] memcg: expose socket memory pressure in a cgroup
On Sat, Aug 09, 2025 at 08:32:22AM -1000, Tejun Heo wrote:
> Hello,
>
> On Tue, Aug 05, 2025 at 08:44:29AM +0200, Daniel Sedlak wrote:
> > This patch exposes a new file for each cgroup in sysfs which signals
> > the cgroup socket memory pressure. The file is accessible in
> > the following path.
> >
> > /sys/fs/cgroup/**/<cgroup name>/memory.net.socket_pressure
> >
> > The output value is a cumulative sum of microseconds spent
> > under pressure for that particular cgroup.
>
> I'm not sure the pressure name fits best when the content is a
> duration. Note that in the memory.pressure file, the main content is
> time-averaged percentages which are the "pressure" numbers. Can this be an
> entry in memory.stat which signifies that it's a duration? net_throttle_us
> or something like that?
Good point, and this can definitely be a metric exposed through
memory.stat. At the moment the metrics in memory.stat are either byte
or page counts; a time duration would be the first of its kind and will
need some work to become part of rstat. Alternatively, we can explore
keeping it separate from rstat with a manual upward sync on the update
side, since updates are not performance critical (the read side seems
to be the performance-critical one for this stat).
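
For illustration, a rough sketch of the manual-sync option. The
socket_pressure_us field and the helper name below are made up for this
example (they are not from Daniel's patch); the idea is just that the
rare update path walks up the hierarchy and adds the delta to every
ancestor, so the read side is a plain atomic64_read() with no rstat
flush:

	/*
	 * Hypothetical per-memcg counter, kept outside rstat:
	 *   atomic64_t socket_pressure_us;  (in struct mem_cgroup)
	 */

	/*
	 * Called when a memcg leaves socket pressure; delta_us is the
	 * time spent under pressure.  Propagate to all ancestors on
	 * the update side so memory.stat can just read the local
	 * counter.
	 */
	static void memcg_account_socket_pressure(struct mem_cgroup *memcg,
						  u64 delta_us)
	{
		for (; memcg; memcg = parent_mem_cgroup(memcg))
			atomic64_add(delta_us, &memcg->socket_pressure_us);
	}

The memory.stat read side would then just print
atomic64_read(&memcg->socket_pressure_us) for that cgroup, with no
hierarchical flush needed at read time.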