Message-Id: <20230901062141.51972-2-wuyun.abel@bytedance.com>
Date: Fri, 1 Sep 2023 14:21:26 +0800
From: Abel Wu <wuyun.abel@...edance.com>
To: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Shakeel Butt <shakeelb@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>,
Yosry Ahmed <yosryahmed@...gle.com>,
Yu Zhao <yuzhao@...gle.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Abel Wu <wuyun.abel@...edance.com>,
Yafang Shao <laoar.shao@...il.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
Kuniyuki Iwashima <kuniyu@...zon.com>,
Martin KaFai Lau <martin.lau@...nel.org>,
Breno Leitao <leitao@...ian.org>,
Alexander Mikhalitsyn <alexander@...alicyn.com>,
David Howells <dhowells@...hat.com>,
Jason Xing <kernelxing@...cent.com>
Cc: linux-kernel@...r.kernel.org (open list),
netdev@...r.kernel.org (open list:NETWORKING [GENERAL]),
linux-mm@...ck.org (open list:MEMORY MANAGEMENT)
Subject: [RFC PATCH net-next 1/3] sock: Code cleanup on __sk_mem_raise_allocated()
Clean up __sk_mem_raise_allocated() for simplicity and readability:
fetch the memcg pointer once instead of repeatedly testing
mem_cgroup_sockets_enabled && sk->sk_memcg, and start 'charged' as
false so it reads true only once a memcg charge has actually
succeeded.

No functional change intended.
Signed-off-by: Abel Wu <wuyun.abel@...edance.com>
---
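For illustration only (not part of the patch): a condensed userspace
sketch of the control-flow change. The struct, the charge_skmem() and
uncharge_skmem() stubs, and the over_limit flag below are hypothetical
stand-ins, not the kernel API.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-ins for the kernel types and memcg charge API. */
struct mem_cgroup { int id; };

static bool charge_skmem(struct mem_cgroup *memcg, int amt)
{
	(void)amt;
	return memcg->id % 2 == 0;	/* pretend only even ids may charge */
}

static void uncharge_skmem(struct mem_cgroup *memcg, int amt)
{
	printf("uncharged %d units from memcg %d\n", amt, memcg->id);
}

/*
 * Shape of the cleanup: look up the (possibly NULL) memcg once, start
 * with charged = false, and flip it only after a successful charge.
 * The failure path can then test 'charged' alone, instead of the old
 * 'memcg_charge && charged' pair that was needed because 'charged'
 * used to start out true.
 */
static int raise_allocated(struct mem_cgroup *sk_memcg, int amt,
			   bool over_limit)
{
	struct mem_cgroup *memcg = sk_memcg;	/* NULL if accounting is off */
	bool charged = false;

	if (memcg) {
		if (!charge_skmem(memcg, amt))
			goto suppress_allocation;
		charged = true;
	}

	if (!over_limit)
		return 1;	/* allocation allowed */

suppress_allocation:
	if (charged)
		uncharge_skmem(memcg, amt);
	return 0;
}

int main(void)
{
	struct mem_cgroup cg = { .id = 2 };

	printf("ok: %d\n", raise_allocated(&cg, 16, false));
	printf("over limit: %d\n", raise_allocated(&cg, 16, true));
	printf("no memcg: %d\n", raise_allocated(NULL, 16, false));
	return 0;
}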
net/core/sock.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/net/core/sock.c b/net/core/sock.c
index 666a17cab4f5..af778fc60a4d 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3040,17 +3040,19 @@ EXPORT_SYMBOL(sk_wait_data);
  */
 int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 {
-	bool memcg_charge = mem_cgroup_sockets_enabled && sk->sk_memcg;
+	struct mem_cgroup *memcg = mem_cgroup_sockets_enabled ? sk->sk_memcg : NULL;
 	struct proto *prot = sk->sk_prot;
-	bool charged = true;
+	bool charged = false;
 	long allocated;
 
 	sk_memory_allocated_add(sk, amt);
 	allocated = sk_memory_allocated(sk);
-	if (memcg_charge &&
-	    !(charged = mem_cgroup_charge_skmem(sk->sk_memcg, amt,
-						gfp_memcg_charge())))
-		goto suppress_allocation;
+
+	if (memcg) {
+		if (!mem_cgroup_charge_skmem(memcg, amt, gfp_memcg_charge()))
+			goto suppress_allocation;
+		charged = true;
+	}
 
 	/* Under limit. */
 	if (allocated <= sk_prot_mem_limits(sk, 0)) {
@@ -3105,8 +3107,8 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 	 */
 	if (sk->sk_wmem_queued + size >= sk->sk_sndbuf) {
 		/* Force charge with __GFP_NOFAIL */
-		if (memcg_charge && !charged) {
-			mem_cgroup_charge_skmem(sk->sk_memcg, amt,
+		if (memcg && !charged) {
+			mem_cgroup_charge_skmem(memcg, amt,
 						gfp_memcg_charge() | __GFP_NOFAIL);
 		}
 		return 1;
@@ -3118,8 +3120,8 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 
 	sk_memory_allocated_sub(sk, amt);
 
-	if (memcg_charge && charged)
-		mem_cgroup_uncharge_skmem(sk->sk_memcg, amt);
+	if (charged)
+		mem_cgroup_uncharge_skmem(memcg, amt);
 
 	return 0;
 }
--
2.37.3