Message-Id: <20230602081135.75424-1-wuyun.abel@bytedance.com>
Date: Fri, 2 Jun 2023 16:11:32 +0800
From: Abel Wu <wuyun.abel@...edance.com>
To: "David S . Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>
Cc: Simon Horman <simon.horman@...igine.com>, netdev@...r.kernel.org,
linux-mm@...ck.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org, Abel Wu <wuyun.abel@...edance.com>
Subject: [PATCH net-next v5 0/3] sock: Improve condition on sockmem pressure
Currently the memcg's pressure status is also accounted into the
socket's memory pressure, to alleviate memstall inside the memcg.
But there are still cases that can be improved. Please check the
individual patches for detailed info.
Tested on an Intel Xeon(R) Platinum 8260, a dual-socket machine
containing 2 NUMA nodes, each of which has 24C/48T. All the
benchmarks were run inside a separate 5-level-deep memcg on a
clean host. Below are the results of tbench4 and netperf:
tbench4 Throughput (misleading but traditional)
                        baseline              patchset
Hmean     1         357.14 (   0.00%)      360.31 *   0.89%*
Hmean     2         716.66 (   0.00%)      724.57 *   1.10%*
Hmean     4        1408.82 (   0.00%)     1424.31 *   1.10%*
Hmean     8        2826.02 (   0.00%)     2832.64 *   0.23%*
Hmean     16       5413.68 (   0.00%)     5347.72 *  -1.22%*
Hmean     32       8692.74 (   0.00%)     8684.26 (  -0.10%)
Hmean     64      10180.12 (   0.00%)    10377.41 *   1.94%*
Hmean     128     22905.53 (   0.00%)    22959.73 *   0.24%*
Hmean     256     22935.78 (   0.00%)    23103.81 *   0.73%*
Hmean     384     22605.36 (   0.00%)    22747.53 *   0.63%*
netperf-udp
                             baseline              patchset
Hmean     send-64        278.42 (   0.00%)      277.05 (  -0.49%)
Hmean     send-128       552.18 (   0.00%)      553.51 (   0.24%)
Hmean     send-256      1096.38 (   0.00%)     1095.84 (  -0.05%)
Hmean     send-1024     4102.79 (   0.00%)     4086.06 (  -0.41%)
Hmean     send-2048     7727.20 (   0.00%)     7769.95 (   0.55%)
Hmean     send-3312    11927.57 (   0.00%)    11966.36 (   0.33%)
Hmean     send-4096    14218.54 (   0.00%)    14193.51 (  -0.18%)
Hmean     send-8192    23903.60 (   0.00%)    24205.35 *   1.26%*
Hmean     send-16384   39600.11 (   0.00%)    39372.47 (  -0.57%)
Hmean     recv-64        278.42 (   0.00%)      277.05 (  -0.49%)
Hmean     recv-128       552.18 (   0.00%)      553.51 (   0.24%)
Hmean     recv-256      1096.38 (   0.00%)     1095.84 (  -0.05%)
Hmean     recv-1024     4102.79 (   0.00%)     4086.06 (  -0.41%)
Hmean     recv-2048     7727.19 (   0.00%)     7769.94 (   0.55%)
Hmean     recv-3312    11927.57 (   0.00%)    11966.36 (   0.33%)
Hmean     recv-4096    14218.45 (   0.00%)    14193.50 (  -0.18%)
Hmean     recv-8192    23903.45 (   0.00%)    24205.21 *   1.26%*
Hmean     recv-16384   39599.53 (   0.00%)    39372.28 (  -0.57%)
netperf-tcp
                        baseline              patchset
Hmean     64        1756.32 (   0.00%)     1808.43 *   2.97%*
Hmean     128       3393.47 (   0.00%)     3421.99 *   0.84%*
Hmean     256       6464.04 (   0.00%)     6459.72 (  -0.07%)
Hmean     1024     19050.99 (   0.00%)    19036.21 (  -0.08%)
Hmean     2048     26107.88 (   0.00%)    26185.44 (   0.30%)
Hmean     3312     30770.77 (   0.00%)    30834.78 (   0.21%)
Hmean     4096     32523.50 (   0.00%)    32609.77 (   0.27%)
Hmean     8192     40180.74 (   0.00%)    39632.41 *  -1.36%*
Hmean     16384    46117.02 (   0.00%)    46259.69 (   0.31%)
No obvious regression is observed.
v5:
- As Paolo pointed out, the cleanup paired with the patch
  removed in v4 should also be removed.
v4:
- Per Shakeel's suggestion, removed the patch that suppresses
  allocation under net-memcg pressure, to avoid keeping the
  senders waiting even longer if SACKed segments get dropped
  from the OFO queue.
v3:
- Fixed some coding style issues pointed out by Simon.
- Folded the dependency into the memcg pressure function to
  improve readability.
v2:
- Split into several patches and modified the commit logs for
  better readability.
- Made the memcg pressure consideration function-wide in
  __sk_mem_raise_allocated().
v1: https://lore.kernel.org/lkml/20230506085903.96133-1-wuyun.abel@bytedance.com/
v2: https://lore.kernel.org/lkml/20230522070122.6727-1-wuyun.abel@bytedance.com/
v3: https://lore.kernel.org/lkml/20230523094652.49411-1-wuyun.abel@bytedance.com/
v4: https://lore.kernel.org/lkml/20230530114011.13368-1-wuyun.abel@bytedance.com/
Abel Wu (3):
net-memcg: Fold dependency into memcg pressure cond
sock: Always take memcg pressure into consideration
sock: Fix misuse of sk_under_memory_pressure()
include/linux/memcontrol.h | 2 ++
include/net/sock.h | 14 ++++++++------
include/net/tcp.h | 3 +--
net/core/sock.c | 2 +-
4 files changed, 12 insertions(+), 9 deletions(-)
--
2.37.3