Message-ID: <jj5w7cpjjyzxasuweiz64jqqxcz23tm75ca22h3wvfj3u4aums@gnjarnf5gpgq>
Date: Tue, 22 Jul 2025 13:11:05 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Kuniyuki Iwashima <kuniyu@...gle.com>
Cc: Michal Koutný <mkoutny@...e.com>,
Daniel Sedlak <daniel.sedlak@...77.com>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>,
Jonathan Corbet <corbet@....net>, Neal Cardwell <ncardwell@...gle.com>,
David Ahern <dsahern@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>,
Yosry Ahmed <yosry.ahmed@...ux.dev>, linux-mm@...ck.org, netdev@...r.kernel.org,
Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, Muchun Song <muchun.song@...ux.dev>, cgroups@...r.kernel.org,
Matyas Hurtik <matyas.hurtik@...77.com>
Subject: Re: [PATCH v3] memcg: expose socket memory pressure in a cgroup
On Tue, Jul 22, 2025 at 12:58:17PM -0700, Kuniyuki Iwashima wrote:
> On Tue, Jul 22, 2025 at 12:05 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> >
> > On Tue, Jul 22, 2025 at 11:27:39AM -0700, Kuniyuki Iwashima wrote:
> > > On Tue, Jul 22, 2025 at 10:50 AM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> > > >
> > > > On Tue, Jul 22, 2025 at 10:57:31AM +0200, Michal Koutný wrote:
> > > > > Hello Daniel.
> > > > >
> > > > > On Tue, Jul 22, 2025 at 09:11:46AM +0200, Daniel Sedlak <daniel.sedlak@...77.com> wrote:
> > > > > > /sys/fs/cgroup/**/<cgroup name>/memory.net.socket_pressure
> > > > > >
> > > > > > The output value is an integer matching the internal semantics of
> > > > > > socket_pressure in struct mem_cgroup. It is a periodically re-armed
> > > > > > clock marking the end of the socket memory pressure window; whenever
> > > > > > the clock is re-armed, it is set to jiffies + HZ.
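
For context, roughly how that clock works today (a simplified paraphrase
of mm/vmpressure.c and mem_cgroup_under_socket_pressure(), not verbatim
kernel code):

  /* Writer side: reclaim pressure re-arms the clock, pushing the
   * "pressure ends" time about one second into the future. */
  WRITE_ONCE(memcg->socket_pressure, jiffies + HZ);

  /* Reader side: the network stack only asks a yes/no question, i.e.
   * is this memcg (or any ancestor) still before the deadline? */
  static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
  {
          do {
                  if (time_before(jiffies, READ_ONCE(memcg->socket_pressure)))
                          return true;
          } while ((memcg = parent_mem_cgroup(memcg)));
          return false;
  }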
> > > > >
> > > > > I don't find it ideal to expose this value in its raw form, which is
> > > > > rather an implementation detail.
> > > > >
> > > > > IIUC, the information is possibly valid only during one jiffy interval.
> > > > > How would userspace consume this?
> > > > >
> > > > > I'd consider exposing this as a cumulative counter in memory.stat for
> > > > > simplicity (or possibly the cumulative time spent in the pressure
> > > > > condition).
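
For concreteness, the cumulative-time variant could look roughly like the
snippet below. This is illustrative only; the sock_pressure_* fields are
invented here and do not exist in the kernel:

  /* On each pressure state transition, accumulate how long the memcg
   * has spent under socket pressure; the total would then be reported
   * via memory.stat. */
  bool pressed = mem_cgroup_under_socket_pressure(memcg);
  u64 now = ktime_get_ns();

  if (pressed && !memcg->sock_pressure_active) {
          memcg->sock_pressure_since = now;       /* entering pressure */
          memcg->sock_pressure_active = true;
  } else if (!pressed && memcg->sock_pressure_active) {
          memcg->sock_pressure_ns += now - memcg->sock_pressure_since;
          memcg->sock_pressure_active = false;    /* leaving pressure */
  }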
> > > > >
> > > > > Shakeel, how useful is this vmpressure per-cgroup tracking nowadays? I
> > > > > thought it was kind of legacy.
> > > >
> > > >
> > > > Yes, vmpressure is legacy and we should not expose the raw underlying
> > > > number to userspace. How about exposing just 0 or 1, backed by
> > > > mem_cgroup_under_socket_pressure()? Then, if we change the underlying
> > > > implementation in the future, the output of this interface will stay
> > > > consistent.
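
For concreteness, that could look something like the sketch below. This is
illustrative only, not the posted patch, and the handler name is made up:

  /* Hypothetical memory.net.socket_pressure read handler reporting a
   * boolean instead of the raw jiffies deadline. */
  static int memcg_sock_pressure_show(struct seq_file *m, void *v)
  {
          struct mem_cgroup *memcg = mem_cgroup_from_seq(m);

          seq_printf(m, "%d\n",
                     mem_cgroup_under_socket_pressure(memcg) ? 1 : 0);
          return 0;
  }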
> > >
> > > But this is available only for 1 second, so it will not be useful
> > > except for live debugging?
> >
> > 1 second is the current implementation, and it can be more if the memcg
> > remains under memory pressure. Regarding usefulness, I think periodic
> > stat collectors (like cadvisor or Google's internal borglet+rumbo) would
> > be interested in scraping this interface.
>
> I think the cumulative counter suggested above is better at least.
A cumulative counter is still tied to the underlying implementation. If we
decide to use, for example, PSI in the future, what should this interface
show?