Message-Id: <20230920132545.56834-2-wuyun.abel@bytedance.com>
Date: Wed, 20 Sep 2023 21:25:41 +0800
From: Abel Wu <wuyun.abel@...edance.com>
To: Shakeel Butt <shakeelb@...gle.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Kuniyuki Iwashima <kuniyu@...zon.com>,
Abel Wu <wuyun.abel@...edance.com>,
Breno Leitao <leitao@...ian.org>,
Alexander Mikhalitsyn <alexander@...alicyn.com>,
David Howells <dhowells@...hat.com>,
Jason Xing <kernelxing@...cent.com>,
Xin Long <lucien.xin@...il.com>,
Glauber Costa <glommer@...allels.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujtsu.com>
Cc: netdev@...r.kernel.org (open list:NETWORKING [GENERAL]),
linux-kernel@...r.kernel.org (open list)
Subject: [PATCH net-next 2/2] sock: Fix improper heuristic on raising memory

Before sockets became aware of net-memcg's memory pressure in
commit e1aab161e013 ("socket: initial cgroup code."), a socket
whose usage was below the per-socket average was still allowed to
raise its memory usage even when the protocol was under pressure.
This provided fairness among the sockets of the same protocol.
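
For reference, a minimal sketch of that original heuristic,
simplified from __sk_mem_raise_allocated() in net/core/sock.c
(charging and failure paths elided, so not the verbatim code):

	if (sk_has_memory_pressure(sk)) {
		u64 alloc;

		/* No pressure anywhere: always allow. */
		if (!sk_under_memory_pressure(sk))
			return 1;

		/* Under pressure: still allow this socket to grow if
		 * its queued memory stays below the average share
		 * implied by the hard limit (limit / nr of sockets).
		 */
		alloc = sk_sockets_allocated_read_positive(sk);
		if (sk_prot_mem_limits(sk, 2) > alloc *
		    sk_mem_pages(sk->sk_wmem_queued +
				 atomic_read(&sk->sk_rmem_alloc) +
				 sk->sk_forward_alloc))
			return 1;
	}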

That commit changed the behavior: the heuristic now also takes
effect when only the memcg is under pressure, where comparing
against the protocol's per-socket average makes no sense. Fix this
by skipping the heuristic when under memcg pressure and suppressing
the allocation instead.
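
For context, the two pressure predicates differ in scope; a
simplified sketch of how they read in include/net/sock.h around
this change (modulo exact details):

	/* True if either the protocol or the net-memcg is under
	 * pressure; this is what the old check keyed off of.
	 */
	static inline bool sk_under_memory_pressure(const struct sock *sk)
	{
		if (!sk->sk_prot->memory_pressure)
			return false;

		if (mem_cgroup_sockets_enabled && sk->sk_memcg &&
		    mem_cgroup_under_socket_pressure(sk->sk_memcg))
			return true;

		return !!*sk->sk_prot->memory_pressure;
	}

	/* True only for protocol-wide (global) pressure, the only
	 * scope where the per-socket average comparison applies.
	 */
	static inline bool sk_under_global_memory_pressure(const struct sock *sk)
	{
		return sk->sk_prot->memory_pressure &&
			!!*sk->sk_prot->memory_pressure;
	}
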
Fixes: e1aab161e013 ("socket: initial cgroup code.")
Signed-off-by: Abel Wu <wuyun.abel@...edance.com>
---
 net/core/sock.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/net/core/sock.c b/net/core/sock.c
index 379eb8b65562..ef5cf6250f17 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3093,8 +3093,16 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 	if (sk_has_memory_pressure(sk)) {
 		u64 alloc;
 
-		if (!sk_under_memory_pressure(sk))
+		if (memcg && mem_cgroup_under_socket_pressure(memcg))
+			goto suppress_allocation;
+
+		if (!sk_under_global_memory_pressure(sk))
 			return 1;
+
+		/* Try to be fair among all the sockets under the
+		 * protocol's memory pressure, by allowing the ones
+		 * whose usage is below average to raise.
+		 */
 		alloc = sk_sockets_allocated_read_positive(sk);
 		if (sk_prot_mem_limits(sk, 2) > alloc *
 		    sk_mem_pages(sk->sk_wmem_queued +
--
2.37.3