Message-ID: <CAAVpQUBgDVHwCzw_UJBeh_SLf=w547fKy9v-ke_Rw7Q-C4rhhg@mail.gmail.com>
Date: Tue, 22 Jul 2025 11:49:39 -0700
From: Kuniyuki Iwashima <kuniyu@...gle.com>
To: Waiman Long <llong@...hat.com>
Cc: Shakeel Butt <shakeel.butt@...ux.dev>, Michal Koutný <mkoutny@...e.com>,
Daniel Sedlak <daniel.sedlak@...77.com>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>, Jonathan Corbet <corbet@....net>, Neal Cardwell <ncardwell@...gle.com>,
David Ahern <dsahern@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>,
Yosry Ahmed <yosry.ahmed@...ux.dev>, linux-mm@...ck.org, netdev@...r.kernel.org,
Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, Muchun Song <muchun.song@...ux.dev>,
cgroups@...r.kernel.org, Matyas Hurtik <matyas.hurtik@...77.com>
Subject: Re: [PATCH v3] memcg: expose socket memory pressure in a cgroup
On Tue, Jul 22, 2025 at 11:41 AM Waiman Long <llong@...hat.com> wrote:
>
>
> On 7/22/25 2:27 PM, Kuniyuki Iwashima wrote:
> > On Tue, Jul 22, 2025 at 10:50 AM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> >> On Tue, Jul 22, 2025 at 10:57:31AM +0200, Michal Koutný wrote:
> >>> Hello Daniel.
> >>>
> >>> On Tue, Jul 22, 2025 at 09:11:46AM +0200, Daniel Sedlak <daniel.sedlak@...77.com> wrote:
> >>>> /sys/fs/cgroup/**/<cgroup name>/memory.net.socket_pressure
> >>>>
> >>>> The output value is an integer matching the internal semantics of the
> >>>> socket_pressure field of struct mem_cgroup. It is a periodically
> >>>> re-armed clock representing the end of the socket memory pressure
> >>>> window; each time the clock is re-armed, it is set to jiffies + HZ.
> >>> I don't find it ideal to expose this value in its raw form, as it is
> >>> rather an implementation detail.
> >>>
> >>> IIUC, the information is possibly valid only during one jiffy interval.
> >>> How would userspace consume this?
> >>>
> >>> I'd consider exposing this as a cumulative counter in memory.stat for
> >>> simplicity (or possibly the cumulative time spent in the pressure
> >>> condition).
> >>>
> >>> Shakeel, how useful is this vmpressure per-cgroup tracking nowadays? I
> >>> thought it's kind of legacy.
> >>
> >> Yes, vmpressure is legacy and we should not expose the raw underlying
> >> number to userspace. How about just 0 or 1, using
> >> mem_cgroup_under_socket_pressure() underneath? Then if we change the
> >> underlying implementation in the future, the output of this interface
> >> will remain consistent.
> > But this is available only for 1 second, so it will not be useful
> > except for live debugging?
>
> If the new interface is used mainly for debugging purposes, I would
> suggest adding the CFTYPE_DEBUG flag so that it will only show up when
> "cgroup_debug" is specified on the kernel command line.
Sorry, I meant that a signal available only for 1 second does not help
troubleshooting; we cannot get any hint from reading 0 _after_ something
bad has happened.
The flag works if the issue is consistent or can be reproduced after a
reboot, but it does not fit here. I guess the flag is for a different
use case?
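
For context, the mechanism being discussed works roughly like this
(paraphrased sketch based on vmpressure and
mem_cgroup_under_socket_pressure(), not the exact upstream code):

    /* vmpressure() re-arms a one-second deadline under reclaim pressure: */
    WRITE_ONCE(memcg->socket_pressure, jiffies + HZ);

    /* A 0/1 file backed by mem_cgroup_under_socket_pressure() would
     * effectively report only this check: */
    bool under_pressure = time_before(jiffies,
                                      READ_ONCE(memcg->socket_pressure));

So any read more than a second after the last re-arm just returns 0,
which is why a cumulative counter (or time spent under pressure) would
be more useful for after-the-fact analysis.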