lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CAAVpQUBrNTFw34Kkh=b2bpa8aKd4XSnZUa6a18zkMjVrBqNHWw@mail.gmail.com>
Date: Wed, 6 Aug 2025 12:20:25 -0700
From: Kuniyuki Iwashima <kuniyu@...gle.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Daniel Sedlak <daniel.sedlak@...77.com>, "David S. Miller" <davem@...emloft.net>, 
	Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, 
	Simon Horman <horms@...nel.org>, Jonathan Corbet <corbet@....net>, Neal Cardwell <ncardwell@...gle.com>, 
	David Ahern <dsahern@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>, 
	Yosry Ahmed <yosry.ahmed@...ux.dev>, linux-mm@...ck.org, netdev@...r.kernel.org, 
	Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>, 
	Roman Gushchin <roman.gushchin@...ux.dev>, Muchun Song <muchun.song@...ux.dev>, 
	cgroups@...r.kernel.org, Tejun Heo <tj@...nel.org>, 
	Michal Koutný <mkoutny@...e.com>, 
	Matyas Hurtik <matyas.hurtik@...77.com>
Subject: Re: [PATCH v4] memcg: expose socket memory pressure in a cgroup

On Tue, Aug 5, 2025 at 4:02 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
>
> On Tue, Aug 05, 2025 at 08:44:29AM +0200, Daniel Sedlak wrote:
> > This patch is a result of our long-standing debug sessions, where it all
> > started as "networking is slow", and TCP network throughput suddenly
> > dropped from tens of Gbps to few Mbps, and we could not see anything in
> > the kernel log or netstat counters.
> >
> > Currently, we have two memory pressure counters for TCP sockets [1],
> > which we manipulate only when the memory pressure is signalled through
> > the proto struct [2]. However, the memory pressure can also be signaled
> > through the cgroup memory subsystem, which we do not reflect in the
> > netstat counters. In the end, when the cgroup memory subsystem signals
> > that it is under pressure, we silently reduce the advertised TCP window
> > with tcp_adjust_rcv_ssthresh() to 4*advmss, which causes a significant
> > throughput reduction.
> >
> > Keep in mind that when the cgroup memory subsystem signals the socket
> > memory pressure, it affects all sockets used in that cgroup.
> >
> > This patch exposes a new file for each cgroup in sysfs which signals
> > the cgroup socket memory pressure. The file is accessible in
> > the following path.
> >
> >   /sys/fs/cgroup/**/<cgroup name>/memory.net.socket_pressure
>
> let's keep the name concise. Maybe memory.net.pressure?
>
> >
> > The output value is a cumulative sum of microseconds spent
> > under pressure for that particular cgroup.
> >
> > Link: https://elixir.bootlin.com/linux/v6.15.4/source/include/uapi/linux/snmp.h#L231-L232 [1]
> > Link: https://elixir.bootlin.com/linux/v6.15.4/source/include/net/sock.h#L1300-L1301 [2]
> > Co-developed-by: Matyas Hurtik <matyas.hurtik@...77.com>
> > Signed-off-by: Matyas Hurtik <matyas.hurtik@...77.com>
> > Signed-off-by: Daniel Sedlak <daniel.sedlak@...77.com>
> > ---
> > Changes:
> > v3 -> v4:
> > - Add documentation
> > - Expose pressure as cummulative counter in microseconds
> > - Link to v3: https://lore.kernel.org/netdev/20250722071146.48616-1-daniel.sedlak@cdn77.com/
> >
> > v2 -> v3:
> > - Expose the socket memory pressure on the cgroups instead of netstat
> > - Split patch
> > - Link to v2: https://lore.kernel.org/netdev/20250714143613.42184-1-daniel.sedlak@cdn77.com/
> >
> > v1 -> v2:
> > - Add tracepoint
> > - Link to v1: https://lore.kernel.org/netdev/20250707105205.222558-1-daniel.sedlak@cdn77.com/
> >
> >  Documentation/admin-guide/cgroup-v2.rst |  7 +++++++
> >  include/linux/memcontrol.h              |  2 ++
> >  mm/memcontrol.c                         | 15 +++++++++++++++
> >  mm/vmpressure.c                         |  9 ++++++++-
> >  4 files changed, 32 insertions(+), 1 deletion(-)
> >
> > diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> > index 0cc35a14afbe..c810b449fb3d 100644
> > --- a/Documentation/admin-guide/cgroup-v2.rst
> > +++ b/Documentation/admin-guide/cgroup-v2.rst
> > @@ -1884,6 +1884,13 @@ The following nested keys are defined.
> >       Shows pressure stall information for memory. See
> >       :ref:`Documentation/accounting/psi.rst <psi>` for details.
> >
> > +  memory.net.socket_pressure
> > +     A read-only single value file showing how many microseconds
> > +     all sockets within that cgroup spent under pressure.
> > +
> > +     Note that when the sockets are under pressure, the networking
> > +     throughput can be significantly degraded.
> > +
> >
> >  Usage Guidelines
> >  ~~~~~~~~~~~~~~~~
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 87b6688f124a..6a1cb9a99b88 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -252,6 +252,8 @@ struct mem_cgroup {
> >        * where socket memory is accounted/charged separately.
> >        */
> >       unsigned long           socket_pressure;
> > +     /* exported statistic for memory.net.socket_pressure */
> > +     unsigned long           socket_pressure_duration;
>
> I think atomic_long_t would be better.
>
> >
> >       int kmemcg_id;
> >       /*
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 902da8a9c643..8e299d94c073 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -3758,6 +3758,7 @@ static struct mem_cgroup *mem_cgroup_alloc(struct mem_cgroup *parent)
> >       INIT_LIST_HEAD(&memcg->swap_peaks);
> >       spin_lock_init(&memcg->peaks_lock);
> >       memcg->socket_pressure = jiffies;
> > +     memcg->socket_pressure_duration = 0;
> >       memcg1_memcg_init(memcg);
> >       memcg->kmemcg_id = -1;
> >       INIT_LIST_HEAD(&memcg->objcg_list);
> > @@ -4647,6 +4648,15 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> >       return nbytes;
> >  }
> >
> > +static int memory_socket_pressure_show(struct seq_file *m, void *v)
> > +{
> > +     struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
> > +
> > +     seq_printf(m, "%lu\n", READ_ONCE(memcg->socket_pressure_duration));
> > +
> > +     return 0;
> > +}
> > +
> >  static struct cftype memory_files[] = {
> >       {
> >               .name = "current",
> > @@ -4718,6 +4728,11 @@ static struct cftype memory_files[] = {
> >               .flags = CFTYPE_NS_DELEGATABLE,
> >               .write = memory_reclaim,
> >       },
> > +     {
> > +             .name = "net.socket_pressure",
> > +             .flags = CFTYPE_NOT_ON_ROOT,
> > +             .seq_show = memory_socket_pressure_show,
> > +     },
> >       { }     /* terminate */
> >  };
> >
> > diff --git a/mm/vmpressure.c b/mm/vmpressure.c
> > index bd5183dfd879..1e767cd8aa08 100644
> > --- a/mm/vmpressure.c
> > +++ b/mm/vmpressure.c
> > @@ -308,6 +308,8 @@ void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
> >               level = vmpressure_calc_level(scanned, reclaimed);
> >
> >               if (level > VMPRESSURE_LOW) {
> > +                     unsigned long socket_pressure;
> > +                     unsigned long jiffies_diff;
> >                       /*
> >                        * Let the socket buffer allocator know that
> >                        * we are having trouble reclaiming LRU pages.
> > @@ -316,7 +318,12 @@ void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
> >                        * asserted for a second in which subsequent
> >                        * pressure events can occur.
> >                        */
> > -                     WRITE_ONCE(memcg->socket_pressure, jiffies + HZ);
> > +                     socket_pressure = jiffies + HZ;
> > +
> > +                     jiffies_diff = min(socket_pressure - READ_ONCE(memcg->socket_pressure), HZ);
> > +                     memcg->socket_pressure_duration += jiffies_to_usecs(jiffies_diff);
>
> KCSAN will complain about this. I think we can use atomic_long_add() and
> don't need the one with strict ordering.

Assuming from atomic_ that vmpressure() could be called concurrently
for the same memcg, should we protect socket_pressure and duration
within the same lock instead of mixing WRITE/READ_ONCE() and
atomic?  Otherwise jiffies_diff could be incorrect (the error is smaller
than HZ though).


>
> > +
> > +                     WRITE_ONCE(memcg->socket_pressure, socket_pressure);
> >               }
> >       }
> >  }
> >
> > base-commit: e96ee511c906c59b7c4e6efd9d9b33917730e000
> > --
> > 2.39.5
> >

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ