Message-Id: <20251016124610.0fcf17313c649795881db43c@linux-foundation.org>
Date: Thu, 16 Oct 2025 12:46:10 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, Muchun Song
<muchun.song@...ux.dev>, Tejun Heo <tj@...nel.org>, Eric Dumazet
<edumazet@...gle.com>, Kuniyuki Iwashima <kuniyu@...gle.com>, Paolo Abeni
<pabeni@...hat.com>, Willem de Bruijn <willemb@...gle.com>, Jakub Kicinski
<kuba@...nel.org>, "David S . Miller" <davem@...emloft.net>, Matyas Hurtik
<matyas.hurtik@...77.com>, Daniel Sedlak <daniel.sedlak@...77.com>, Simon
Horman <horms@...nel.org>, Neal Cardwell <ncardwell@...gle.com>, Wei Wang
<weibunny@...a.com>, netdev@...r.kernel.org, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org, Meta kernel team
<kernel-team@...a.com>
Subject: Re: [PATCH v2] memcg: net: track network throttling due to memcg
memory pressure
On Thu, 16 Oct 2025 09:10:35 -0700 Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> The kernel can throttle network sockets if the memory cgroup associated
> with the corresponding socket is under memory pressure. The throttling
> actions include clamping the transmit window, failing to expand receive
> or send buffers, aggressively pruning the out-of-order receive queue,
> deferring FIN to a retransmitted packet, and more. Let's add a memcg
> metric to track such throttling actions.
>
> At the moment memcg memory pressure is defined through vmpressure; in
> the future it may be defined using PSI, or we may add a more flexible
> way for users to define memory pressure, perhaps through eBPF. However,
> the potential throttling actions will remain the same, so this newly
> introduced metric will continue to track throttling actions regardless
> of how memcg memory pressure is defined.
>
> ...
>
> --- a/include/net/sock.h
> +++ b/include/net/sock.h
> @@ -2635,8 +2635,12 @@ static inline bool mem_cgroup_sk_under_memory_pressure(const struct sock *sk)
> #endif /* CONFIG_MEMCG_V1 */
>
> do {
> - if (time_before64(get_jiffies_64(), mem_cgroup_get_socket_pressure(memcg)))
> + if (time_before64(get_jiffies_64(),
> + mem_cgroup_get_socket_pressure(memcg))) {
> + memcg_memory_event(mem_cgroup_from_sk(sk),
> + MEMCG_SOCK_THROTTLED);
> return true;
> + }
> } while ((memcg = parent_mem_cgroup(memcg)));
>
Totally OT, but that's one bigass inlined function. A quick test
indicates that uninlining just this function reduces the size of
tcp_input.o and tcp_output.o nicely. x86_64 defconfig:
Before:
   text	   data	    bss	    dec	    hex	filename
  52130	   1686	      0	  53816	   d238	net/ipv4/tcp_input.o
  32335	   1221	      0	  33556	   8314	net/ipv4/tcp_output.o

After:
   text	   data	    bss	    dec	    hex	filename
  51346	   1494	      0	  52840	   ce68	net/ipv4/tcp_input.o
  31911	   1125	      0	  33036	   810c	net/ipv4/tcp_output.o
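
The uninlining being measured would look roughly like the sketch below:
drop the static inline definition from the header in favor of a plain
declaration, and give the function a single definition in a .c file
(net/core/sock.c is one plausible home). This is an illustrative
fragment, not the patch that produced the numbers above; EXPORT_SYMBOL
would only be needed if modular code calls it:

```diff
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@
-static inline bool mem_cgroup_sk_under_memory_pressure(const struct sock *sk)
-{
-	...
-}
+bool mem_cgroup_sk_under_memory_pressure(const struct sock *sk);
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@
+bool mem_cgroup_sk_under_memory_pressure(const struct sock *sk)
+{
+	/* body moved verbatim from the header */
+	...
+}
+EXPORT_SYMBOL(mem_cgroup_sk_under_memory_pressure);
```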