Message-ID: <20250506225533.2580386-2-shakeel.butt@linux.dev>
Date: Tue, 6 May 2025 15:55:30 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>,
Alexei Starovoitov <ast@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Jakub Kicinski <kuba@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
linux-mm@...ck.org,
cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org,
bpf@...r.kernel.org,
netdev@...r.kernel.org,
Meta kernel team <kernel-team@...a.com>
Subject: [PATCH 1/4] memcg: simplify consume_stock
consume_stock() does not need to check gfp_mask to decide whether it may
spin; it can simply trylock the local lock and either proceed or fail.
There is no need to spin on the local lock at all.
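For illustration, a condensed sketch of the fast-path check after this
change (it only mirrors the hunks in the diff below; the unlock and the
per-cpu stock accounting are elided, the real context is in
mm/memcontrol.c):

	static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
	{
		unsigned long flags;
		bool ret = false;

		/*
		 * Oversized requests and a contended local lock both fall
		 * back to the slow path; we never spin on the lock.
		 */
		if (nr_pages > MEMCG_CHARGE_BATCH ||
		    !local_trylock_irqsave(&memcg_stock.stock_lock, flags))
			return ret;

		/* ... consume from the per-cpu stock as before ... */

		local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
		return ret;
	}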
One of the concerns raised was that on PREEMPT_RT kernels this trylock
can fail more often, because a task holding the local_lock can be
preempted. A task that preempts the holder of the local_lock may then
have to take the slow path of memcg charging.
However, this only hurts performance if the memcg charging slow path is
more expensive than the two context switches (plus the possible
scheduling delay) incurred by the current code. The network intensive
workload experiment below suggests that it is not.
We ran a varying number of netperf clients in different cgroups on a
72 CPU machine with a PREEMPT_RT config.
$ netserver -6
$ netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K
number of clients | Without series | With series
                6 |   38559.1 Mbps |  38652.6 Mbps
               12 |   37388.8 Mbps |  37560.1 Mbps
               18 |   30707.5 Mbps |  31378.3 Mbps
               24 |   25908.4 Mbps |  26423.9 Mbps
               30 |   22347.7 Mbps |  22326.5 Mbps
               36 |   20235.1 Mbps |  20165.0 Mbps
We don't see any significant performance difference for this
network-intensive workload with the series applied.
Signed-off-by: Shakeel Butt <shakeel.butt@...ux.dev>
---
mm/memcontrol.c | 20 +++++++-------------
1 file changed, 7 insertions(+), 13 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c44124ea3d08..7561e12ca0e0 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1808,16 +1808,14 @@ static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
* consume_stock: Try to consume stocked charge on this cpu.
* @memcg: memcg to consume from.
* @nr_pages: how many pages to charge.
- * @gfp_mask: allocation mask.
*
- * The charges will only happen if @memcg matches the current cpu's memcg
- * stock, and at least @nr_pages are available in that stock. Failure to
- * service an allocation will refill the stock.
+ * Consume the cached charge if enough nr_pages are present otherwise return
+ * failure. Also return failure for charge request larger than
+ * MEMCG_CHARGE_BATCH or if the local lock is already taken.
*
* returns true if successful, false otherwise.
*/
-static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages,
- gfp_t gfp_mask)
+static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
{
struct memcg_stock_pcp *stock;
uint8_t stock_pages;
@@ -1825,12 +1823,8 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages,
bool ret = false;
int i;
- if (nr_pages > MEMCG_CHARGE_BATCH)
- return ret;
-
- if (gfpflags_allow_spinning(gfp_mask))
- local_lock_irqsave(&memcg_stock.stock_lock, flags);
- else if (!local_trylock_irqsave(&memcg_stock.stock_lock, flags))
+ if (nr_pages > MEMCG_CHARGE_BATCH ||
+ !local_trylock_irqsave(&memcg_stock.stock_lock, flags))
return ret;
stock = this_cpu_ptr(&memcg_stock);
@@ -2333,7 +2327,7 @@ static int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
unsigned long pflags;
retry:
- if (consume_stock(memcg, nr_pages, gfp_mask))
+ if (consume_stock(memcg, nr_pages))
return 0;
if (!gfpflags_allow_spinning(gfp_mask))
--
2.47.1