Message-ID: <875xcn526v.fsf@linux.dev>
Date: Thu, 09 Oct 2025 12:02:00 -0700
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Daniel Sedlak <daniel.sedlak@...77.com>,  "David S. Miller"
 <davem@...emloft.net>,  Eric Dumazet <edumazet@...gle.com>,  Jakub
 Kicinski <kuba@...nel.org>,  Paolo Abeni <pabeni@...hat.com>,  Simon
 Horman <horms@...nel.org>,  Jonathan Corbet <corbet@....net>,  Neal
 Cardwell <ncardwell@...gle.com>,  Kuniyuki Iwashima <kuniyu@...gle.com>,
  David Ahern <dsahern@...nel.org>,  Andrew Morton
 <akpm@...ux-foundation.org>,  Yosry Ahmed <yosry.ahmed@...ux.dev>,
  linux-mm@...ck.org,  netdev@...r.kernel.org,  Johannes Weiner
 <hannes@...xchg.org>,  Michal Hocko <mhocko@...nel.org>,  Muchun Song
 <muchun.song@...ux.dev>,  cgroups@...r.kernel.org,  Tejun Heo
 <tj@...nel.org>,  Michal Koutný <mkoutny@...e.com>,
  Matyas Hurtik
 <matyas.hurtik@...77.com>
Subject: Re: [PATCH v5] memcg: expose socket memory pressure in a cgroup

Shakeel Butt <shakeel.butt@...ux.dev> writes:

> On Thu, Oct 09, 2025 at 10:58:51AM -0700, Roman Gushchin wrote:
>> Shakeel Butt <shakeel.butt@...ux.dev> writes:
>> 
>> > On Thu, Oct 09, 2025 at 08:32:27AM -0700, Roman Gushchin wrote:
>> >> Daniel Sedlak <daniel.sedlak@...77.com> writes:
>> >> 
>> >> > Hi Roman,
>> >> >
>> >> > On 10/8/25 8:58 PM, Roman Gushchin wrote:
>> >> >>> This patch exposes a new file for each cgroup in the cgroup
>> >> >>> filesystem: a read-only, single-value file showing how many
>> >> >>> microseconds this cgroup has contributed to throttling the
>> >> >>> throughput of network sockets. The file is accessible at the
>> >> >>> following path:
>> >> >>>
>> >> >>>    /sys/fs/cgroup/**/<cgroup name>/memory.net.throttled_usec
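
(For anyone following along: consuming it from userspace would be as
simple as reading any other single-value cgroup file. A minimal sketch,
where "example" is a placeholder cgroup name and the file only exists
with this patch applied:)

#include <stdio.h>

int main(void)
{
	/* Cumulative microseconds of socket throttling, per the changelog. */
	const char *path =
		"/sys/fs/cgroup/example/memory.net.throttled_usec";
	FILE *f = fopen(path, "r");
	unsigned long long usec;

	if (!f || fscanf(f, "%llu", &usec) != 1) {
		perror(path);
		return 1;
	}
	fclose(f);
	printf("sockets throttled for %llu usec so far\n", usec);
	return 0;
}
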
>> >> >> Hi Daniel!
>> >> >> How is this value going to be used? In other words, do you need an
>> >> >> exact number, or would something like memory.events::net_throttled
>> >> >> be enough for your case?
>> >> >
>> >> > Just incrementing a counter each time vmpressure() kicks in IMO gives
>> >> > poor semantics for what is actually happening, because it can hide
>> >> > important details, mainly the _time_ for which the network traffic
>> >> > was slowed down.
>> >> >
>> >> > For example, memory.events::net_throttled=1000 can mean that the
>> >> > network was slowed down for 1 second, for 1000 seconds, or anything
>> >> > in between; the memory.net.throttled_usec proposed by this patch
>> >> > disambiguates that.
>> >> >
>> >> > In addition, v1/v2 of this series started that way; from v3 onward we
>> >> > rewrote it to report the duration instead, which proved to be better
>> >> > information for debugging, as the implications are easier to
>> >> > understand.
>> >> 
>> >> But how are you planning to use this information? Is this just
>> >> "networking is under pressure for a non-trivial amount of time ->
>> >> raise the memcg limit" or something more complicated?
>> >> 
>> >> I am a bit concerned about making this metric part of the cgroup API,
>> >> simply because it's too implementation-defined and, in my opinion,
>> >> lacks a fundamental meaning.
>> >> 
>> >> Vmpressure is calculated from the scanned/reclaimed ratio (which is
>> >> also not always the best proxy for the memory pressure level), and if
>> >> it reaches a certain level we basically throttle networking for 1s.
>> >> So it's all very arbitrary.
>> >> 
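
(For readers who don't know the mechanism: here is a rough userspace
model of the heuristic described above, paraphrased from
mm/vmpressure.c. The formula, the 60% threshold and the one-second
window reflect my reading of the current code, but this is not verbatim
kernel code:)

#include <stdio.h>

/* vmpressure's 0..100 "pressure" value for one reclaim run,
 * paraphrased from vmpressure_calc_level(). */
static unsigned long vmpressure_pct(unsigned long scanned,
				    unsigned long reclaimed)
{
	unsigned long scale = scanned + reclaimed;

	if (!scanned)
		return 0;
	return (scale - (reclaimed * scale / scanned)) * 100 / scale;
}

int main(void)
{
	/* e.g. 1000 pages scanned but only 100 of them reclaimed */
	unsigned long pct = vmpressure_pct(1000, 100);

	/*
	 * In the kernel, anything above the "low" level (>= 60, the
	 * medium threshold) bumps the memcg's socket_pressure timestamp
	 * to jiffies + HZ, and the network stack's under-pressure check
	 * is simply a comparison of the current jiffies against that
	 * timestamp.
	 */
	printf("pressure = %lu%% -> %s\n", pct,
	       pct >= 60 ? "sockets throttled for ~1s" : "no throttling");
	return 0;
}
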
>> >> I totally get it from the debugging perspective, but I'm not sure
>> >> about its usefulness as a permanent metric. This is why I'm asking
>> >> whether there are lighter alternatives, e.g. memory.events or maybe
>> >> even tracepoints.
>> >> 
>> >
>> > I have a very similar opinion: if we expose the current implementation
>> > detail through a stable interface, we might get stuck with this
>> > implementation, and I want to be able to change it in the future.
>> >
>> > Coming back to what information we should expose that will be helpful
>> > for Daniel & Matyas and beneficial in general: after giving it some
>> > thought, I think the time the "network was slowed down", or more
>> > specifically the time window during which
>> > mem_cgroup_sk_under_memory_pressure() returns true, might not be that
>> > useful without the actual network activity. Basically, if no one is
>> > calling mem_cgroup_sk_under_memory_pressure() and acting on it, the
>> > time window is not that useful.
>> >
>> > How about we track the actions taken by the callers of
>> > mem_cgroup_sk_under_memory_pressure()? Basically, if the network stack
>> > reduces the buffer size, or takes whatever other actions it may take
>> > when mem_cgroup_sk_under_memory_pressure() returns true, tracking
>> > those actions is what I think is needed here, at least for the
>> > debugging use case.
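
(To make that concrete, roughly the shape I read into the suggestion.
Everything here is hypothetical: MEMCG_SOCK_THROTTLED is an invented
memory.events entry, and the real call site would be wherever the
network stack actually backs off; memcg_memory_event() is the existing
helper that feeds memory.events:)

	/*
	 * Hypothetical sketch, not a real patch: count the clamp action
	 * itself rather than the pressure window. "memcg" here is the
	 * socket's memcg.
	 */
	if (mem_cgroup_sk_under_memory_pressure(sk)) {
		memcg_memory_event(memcg, MEMCG_SOCK_THROTTLED);
		/* ... existing back-off / buffer-clamping path ... */
	}
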
>> >
>> > WDYT?
>> 
>> I feel like, if it's mostly intended for debugging purposes, a
>> combination of a tracepoint and bpftrace can work pretty well, so there
>> is no need to create a new sysfs interface.
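
(Something along these lines, i.e. a tracepoint at the clamp site that
bpftrace can attach to. The tracepoint name and fields are invented
purely for illustration; nothing like it exists today:)

/*
 * Hypothetical tracepoint: would fire wherever the stack clamps a
 * socket because of memcg pressure, letting bpftrace aggregate
 * per-cgroup counts or durations without any new cgroup file.
 */
TRACE_EVENT(memcg_sk_throttled,

	TP_PROTO(const struct sock *sk, u64 cgrp_id),

	TP_ARGS(sk, cgrp_id),

	TP_STRUCT__entry(
		__field(const void *, sk)
		__field(u64, cgrp_id)
	),

	TP_fast_assign(
		__entry->sk = sk;
		__entry->cgrp_id = cgrp_id;
	),

	TP_printk("sk=%p cgrp_id=%llu", __entry->sk, __entry->cgrp_id)
);
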
>> 
>
> Definitely not a new interface, but I think having such information in
> memory.events or memory.stat would be more convenient. Basically, the
> number of times the sockets in this memcg had to be clamped due to
> memory pressure would be useful in general.

Yeah, if we're going to add something, memory.events looks like the best
option, also because it can be polled to get a notification when the
event occurs.
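
Something like the following should already work for memory.events
today (the cgroup path is just a placeholder). As far as I can tell,
kernfs reports changes to cgroup event files as POLLPRI, and inotify
on the file works as well:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Placeholder path; point it at the memcg you care about. */
	const char *path = "/sys/fs/cgroup/example/memory.events";
	char buf[4096];
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		perror(path);
		return 1;
	}

	for (;;) {
		struct pollfd pfd = { .fd = fd, .events = POLLPRI };
		ssize_t len;

		if (poll(&pfd, 1, -1) < 0)
			break;

		/* Re-reading from offset 0 re-arms the notification. */
		len = pread(fd, buf, sizeof(buf) - 1, 0);
		if (len <= 0)
			break;
		buf[len] = '\0';
		fputs(buf, stdout);
	}

	close(fd);
	return 0;
}
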
