Message-ID: <CAAVpQUCOwFksmo72p_nkr1uJMLRcRo1VAneADon9OxDLoRH0KA@mail.gmail.com>
Date: Tue, 22 Jul 2025 12:58:17 -0700
From: Kuniyuki Iwashima <kuniyu@...gle.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Michal Koutný <mkoutny@...e.com>,
Daniel Sedlak <daniel.sedlak@...77.com>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>, Jonathan Corbet <corbet@....net>, Neal Cardwell <ncardwell@...gle.com>,
David Ahern <dsahern@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>,
Yosry Ahmed <yosry.ahmed@...ux.dev>, linux-mm@...ck.org, netdev@...r.kernel.org,
Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, Muchun Song <muchun.song@...ux.dev>,
cgroups@...r.kernel.org, Matyas Hurtik <matyas.hurtik@...77.com>
Subject: Re: [PATCH v3] memcg: expose socket memory pressure in a cgroup
On Tue, Jul 22, 2025 at 12:05 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
>
> On Tue, Jul 22, 2025 at 11:27:39AM -0700, Kuniyuki Iwashima wrote:
> > On Tue, Jul 22, 2025 at 10:50 AM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> > >
> > > On Tue, Jul 22, 2025 at 10:57:31AM +0200, Michal Koutný wrote:
> > > > Hello Daniel.
> > > >
> > > > On Tue, Jul 22, 2025 at 09:11:46AM +0200, Daniel Sedlak <daniel.sedlak@...77.com> wrote:
> > > > > /sys/fs/cgroup/**/<cgroup name>/memory.net.socket_pressure
> > > > >
> > > > > The output value is an integer matching the internal semantics of
> > > > > struct mem_cgroup's socket_pressure field. It is a periodically
> > > > > re-armed clock marking the end of the socket memory pressure window;
> > > > > each time the clock is re-armed, it is set to jiffies + HZ.
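
For context, the clock semantics described above look roughly like the
simplified sketch below (paraphrased, not verbatim kernel code; details
may differ by kernel version):

/*
 * Simplified sketch: vmpressure re-arms the clock while the memcg is
 * under pressure,
 *
 *	WRITE_ONCE(memcg->socket_pressure, jiffies + HZ);
 *
 * and readers treat pressure as active while that window is still open:
 */
static bool socket_pressure_active(struct mem_cgroup *memcg)
{
	return time_before(jiffies, READ_ONCE(memcg->socket_pressure));
}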
> > > >
> > > > I don't find it ideal to expose this value in its raw form, as it is
> > > > rather an implementation detail.
> > > >
> > > > IIUC, the information is possibly valid only during one jiffy interval.
> > > > How would userspace consume this?
> > > >
> > > > I'd consider exposing this as a cumulative counter in memory.stat for
> > > > simplicity (or possibly the cumulative time spent in the pressure
> > > > condition).
> > > >
> > > > Shakeel, how useful is this vmpressure per-cgroup tracking nowadays? I
> > > > thought it was kind of legacy.
> > >
> > > Yes, vmpressure is legacy, and we should not expose the raw underlying
> > > number to userspace. How about just 0 or 1, using
> > > mem_cgroup_under_socket_pressure() underneath? If we change the
> > > underlying implementation in the future, the output of this interface
> > > will then remain consistent.
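
A minimal sketch of what that 0/1 read handler could look like, assuming
a new cftype in the memory controller; the function name and the wiring
are illustrative, and only mem_cgroup_under_socket_pressure() is the
existing helper:

static int memory_net_socket_pressure_show(struct seq_file *m, void *v)
{
	struct mem_cgroup *memcg = mem_cgroup_from_seq(m);

	/* Expose only the abstract state, not the raw jiffies clock. */
	seq_printf(m, "%d\n", mem_cgroup_under_socket_pressure(memcg));
	return 0;
}

Userspace would then only ever see the abstract boolean, so the
jiffies-based clock stays an implementation detail.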
> >
> > But this is available only for 1 second, so it will not be useful
> > except for live debugging?
>
> 1 second is the current implementation, and it can be longer if the memcg
> remains under memory pressure. Regarding usefulness, I think the periodic
> stat collectors (like cadvisor or Google's internal borglet+rumbo) would
> be interested in scraping this interface.
I think the cumulative counter suggested above is better, at least.
If we poll such an interface periodically, the cumulative counter
works just as well; we can simply calculate the delta. And even if we
don't monitor it continuously, we can still tell after the fact whether
there was memory pressure.
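
As a rough illustration of that polling model, a userspace scraper could
do something like the sketch below; the counter file name is
hypothetical, as no such interface exists today:

#include <stdio.h>
#include <unistd.h>

/* Read a single integer from a (hypothetical) cgroup counter file. */
static unsigned long long read_counter(const char *path)
{
	unsigned long long val = 0;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%llu", &val) != 1)
			val = 0;
		fclose(f);
	}
	return val;
}

int main(void)
{
	/* Hypothetical cumulative counter file, named for illustration. */
	const char *path =
		"/sys/fs/cgroup/foo/memory.net.socket_pressure_total";
	unsigned long long prev = read_counter(path);

	for (;;) {
		sleep(10);
		unsigned long long cur = read_counter(path);

		/* The delta is what a periodic scraper would export. */
		printf("pressure delta over last 10s: %llu\n", cur - prev);
		prev = cur;
	}
	return 0;
}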
> If this is still not useful, what would be better? Some kind of trace
> which tracks the socket pressure state of a memcg (i.e. going into and
> out of pressure)?
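
For reference, such a tracepoint might look roughly like the sketch
below; the event name and fields are assumptions, not an existing
kernel tracepoint:

TRACE_EVENT(memcg_socket_pressure,
	TP_PROTO(struct mem_cgroup *memcg, bool under_pressure),
	TP_ARGS(memcg, under_pressure),
	TP_STRUCT__entry(
		__field(u64, cgrp_id)
		__field(bool, under_pressure)
	),
	TP_fast_assign(
		/* Identify the memcg by its cgroup id for stable output. */
		__entry->cgrp_id = cgroup_id(memcg->css.cgroup);
		__entry->under_pressure = under_pressure;
	),
	TP_printk("cgroup_id=%llu under_pressure=%d",
		  (unsigned long long)__entry->cgrp_id,
		  __entry->under_pressure)
);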