[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CAAVpQUDWKaB6jH3Ouyx35z5eUb9GKfgHS0H7ngcPEFeBdtPjRw@mail.gmail.com>
Date: Wed, 15 Oct 2025 11:21:17 -0700
From: Kuniyuki Iwashima <kuniyu@...gle.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Daniel Sedlak <daniel.sedlak@...77.com>, Roman Gushchin <roman.gushchin@...ux.dev>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>,
Jonathan Corbet <corbet@....net>, Neal Cardwell <ncardwell@...gle.com>, David Ahern <dsahern@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>, Yosry Ahmed <yosry.ahmed@...ux.dev>, linux-mm@...ck.org,
netdev@...r.kernel.org, Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>, Muchun Song <muchun.song@...ux.dev>, cgroups@...r.kernel.org,
Tejun Heo <tj@...nel.org>, Michal Koutný <mkoutny@...e.com>,
Matyas Hurtik <matyas.hurtik@...77.com>
Subject: Re: [PATCH v5] memcg: expose socket memory pressure in a cgroup
On Tue, Oct 14, 2025 at 1:33 PM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
>
> On Mon, Oct 13, 2025 at 04:30:53PM +0200, Daniel Sedlak wrote:
> [...]
> > > > > > How about we track the actions taken by the callers of
> > > > > > mem_cgroup_sk_under_memory_pressure()? Basically if network stack
> > > > > > reduces the buffer size or whatever the other actions it may take when
> > > > > > mem_cgroup_sk_under_memory_pressure() returns, tracking those actions
> > > > > > is what I think is needed here, at least for the debugging use-case.
> >
> > I am not against it, but I feel that conveying those tracked actions (or how
> > to represent them) to the user will be much harder. Are there already
> > existing APIs to push this information to the user?
> >
>
> I discussed with Wei Wang and she suggested we should start tracking the
> calls to tcp_adjust_rcv_ssthresh() first. So, something like the
> following. I would like feedback frm networking folks as well:
I think we could simply put memcg_memory_event() in
mem_cgroup_sk_under_memory_pressure() when it returns
true.
Other than tcp_adjust_rcv_ssthresh(), if tcp_under_memory_pressure()
returns true, it indicates something bad will happen, failure to expand
rcvbuf and sndbuf, need to prune out-of-order queue more aggressively,
FIN deferred to a retransmitted packet.
Also, we could cover mptcp and sctp too.
>
>
> From 54bd2bf6681c1c694295646532f2a62a205ee41a Mon Sep 17 00:00:00 2001
> From: Shakeel Butt <shakeel.butt@...ux.dev>
> Date: Tue, 14 Oct 2025 13:27:36 -0700
> Subject: [PATCH] memcg: track network throttling due to memcg memory pressure
>
> Signed-off-by: Shakeel Butt <shakeel.butt@...ux.dev>
> ---
> include/linux/memcontrol.h | 1 +
> mm/memcontrol.c | 2 ++
> net/ipv4/tcp_input.c | 5 ++++-
> net/ipv4/tcp_output.c | 8 ++++++--
> 4 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 873e510d6f8d..5fe254813123 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -52,6 +52,7 @@ enum memcg_memory_event {
> MEMCG_SWAP_HIGH,
> MEMCG_SWAP_MAX,
> MEMCG_SWAP_FAIL,
> + MEMCG_SOCK_THROTTLED,
> MEMCG_NR_MEMORY_EVENTS,
> };
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 4deda33625f4..9207bba34e2e 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -4463,6 +4463,8 @@ static void __memory_events_show(struct seq_file *m, atomic_long_t *events)
> atomic_long_read(&events[MEMCG_OOM_KILL]));
> seq_printf(m, "oom_group_kill %lu\n",
> atomic_long_read(&events[MEMCG_OOM_GROUP_KILL]));
> + seq_printf(m, "sock_throttled %lu\n",
> + atomic_long_read(&events[MEMCG_SOCK_THROTTLED]));
> }
>
> static int memory_events_show(struct seq_file *m, void *v)
> diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
> index 31ea5af49f2d..2206968fb505 100644
> --- a/net/ipv4/tcp_input.c
> +++ b/net/ipv4/tcp_input.c
> @@ -713,6 +713,7 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb,
> * Adjust rcv_ssthresh according to reserved mem
> */
> tcp_adjust_rcv_ssthresh(sk);
> + memcg_memory_event(sk->sk_memcg, MEMCG_SOCK_THROTTLED);
> }
> }
>
> @@ -5764,8 +5765,10 @@ static int tcp_prune_queue(struct sock *sk, const struct sk_buff *in_skb)
>
> if (!tcp_can_ingest(sk, in_skb))
> tcp_clamp_window(sk);
> - else if (tcp_under_memory_pressure(sk))
> + else if (tcp_under_memory_pressure(sk)) {
> tcp_adjust_rcv_ssthresh(sk);
> + memcg_memory_event(sk->sk_memcg, MEMCG_SOCK_THROTTLED);
> + }
>
> if (tcp_can_ingest(sk, in_skb))
> return 0;
> diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
> index bb3576ac0ad7..8fe8d973d7ac 100644
> --- a/net/ipv4/tcp_output.c
> +++ b/net/ipv4/tcp_output.c
> @@ -3275,8 +3275,10 @@ u32 __tcp_select_window(struct sock *sk)
> if (free_space < (full_space >> 1)) {
> icsk->icsk_ack.quick = 0;
>
> - if (tcp_under_memory_pressure(sk))
> + if (tcp_under_memory_pressure(sk)) {
> tcp_adjust_rcv_ssthresh(sk);
> + memcg_memory_event(sk->sk_memcg, MEMCG_SOCK_THROTTLED);
> + }
>
> /* free_space might become our new window, make sure we don't
> * increase it due to wscale.
> @@ -3334,8 +3336,10 @@ u32 __tcp_select_window(struct sock *sk)
> if (free_space < (full_space >> 1)) {
> icsk->icsk_ack.quick = 0;
>
> - if (tcp_under_memory_pressure(sk))
> + if (tcp_under_memory_pressure(sk)) {
> tcp_adjust_rcv_ssthresh(sk);
> + memcg_memory_event(sk->sk_memcg, MEMCG_SOCK_THROTTLED);
> + }
>
> /* if free space is too low, return a zero window */
> if (free_space < (allowed_space >> 4) || free_space < mss ||
> --
> 2.47.3
>
Powered by blists - more mailing lists