Message-ID: <as5cdsm4lraxupg3t6onep2ixql72za25hvd4x334dsoyo4apr@zyzl4vkuevuv>
Date: Fri, 25 Apr 2025 13:18:36 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>,
Roman Gushchin <roman.gushchin@...ux.dev>, Muchun Song <muchun.song@...ux.dev>,
Vlastimil Babka <vbabka@...e.cz>, Jakub Kicinski <kuba@...nel.org>,
Eric Dumazet <edumazet@...gle.com>, Soheil Hassas Yeganeh <soheil@...gle.com>, linux-mm@...ck.org,
cgroups@...r.kernel.org, netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Meta kernel team <kernel-team@...a.com>
Subject: Re: [PATCH] memcg: multi-memcg percpu charge cache
Hi Andrew,

Here is another fix for this patch. It simplifies refill_stock() and
makes sure a given memcg is not cached in multiple stock slots.
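For anyone skimming the diff below, the new slot-selection flow can be
sketched in user space roughly as follows. This is only an illustration:
the names (stock_sim, refill_sim, drain_slot, NR_SLOTS, CHARGE_BATCH) are
made up, and the real kernel code additionally uses READ_ONCE/WRITE_ONCE,
css reference counting, a local lock, and a random eviction victim, all
omitted or simplified here.

```c
#include <assert.h>

/* Made-up user-space stand-in for the per-cpu memcg stock. */
#define NR_SLOTS 4
#define CHARGE_BATCH 64U

struct stock_sim {
	int cached[NR_SLOTS];		/* memcg id; 0 means empty slot */
	unsigned int nr_pages[NR_SLOTS];
};

/* In the kernel, draining uncharges the pages and drops the css
 * reference; here we just clear the slot. */
static void drain_slot(struct stock_sim *s, int i)
{
	s->cached[i] = 0;
	s->nr_pages[i] = 0;
}

/* Mirror of the patched flow: a single pass both records the first
 * empty slot and looks for an existing entry for this memcg, so the
 * same memcg can never occupy two slots. Only when neither is found
 * do we evict a victim (fixed here; random in the kernel).
 * Returns the slot index used. */
static int refill_sim(struct stock_sim *s, int memcg_id,
		      unsigned int nr_pages)
{
	int empty_slot = -1, i;

	for (i = 0; i < NR_SLOTS; i++) {
		if (!s->cached[i] && empty_slot == -1)
			empty_slot = i;
		if (s->cached[i] == memcg_id) {
			s->nr_pages[i] += nr_pages;
			if (s->nr_pages[i] > CHARGE_BATCH)
				drain_slot(s, i);
			return i;
		}
	}

	i = empty_slot;
	if (i == -1) {
		i = 0;	/* kernel: get_random_u32_below(NR_MEMCG_STOCK) */
		drain_slot(s, i);
	}
	s->cached[i] = memcg_id;
	s->nr_pages[i] = nr_pages;
	return i;
}
```

The key difference from the previous version is that caching into an empty
slot is deferred until the whole array has been scanned, so an existing
entry for the memcg always wins over creating a second one.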

From 6f6f7736799ad8ca5fee48eca7b7038f6c9bb5b9 Mon Sep 17 00:00:00 2001
From: Shakeel Butt <shakeel.butt@...ux.dev>
Date: Fri, 25 Apr 2025 13:10:43 -0700
Subject: [PATCH] memcg: multi-memcg percpu charge cache - fix 2

Simplify refill_stock() by dropping the goto and doing the operations
inline, and make sure the given memcg is not cached in multiple slots.
Signed-off-by: Shakeel Butt <shakeel.butt@...ux.dev>
---
mm/memcontrol.c | 27 +++++++++++++++------------
1 file changed, 15 insertions(+), 12 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 997e2da5d2ca..9dfdbb2fcccc 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1907,7 +1907,8 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
struct mem_cgroup *cached;
uint8_t stock_pages;
unsigned long flags;
- bool evict = true;
+ bool success = false;
+ int empty_slot = -1;
int i;
/*
@@ -1931,26 +1932,28 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
stock = this_cpu_ptr(&memcg_stock);
for (i = 0; i < NR_MEMCG_STOCK; ++i) {
-again:
cached = READ_ONCE(stock->cached[i]);
- if (!cached) {
- css_get(&memcg->css);
- WRITE_ONCE(stock->cached[i], memcg);
- }
- if (!cached || memcg == READ_ONCE(stock->cached[i])) {
+ if (!cached && empty_slot == -1)
+ empty_slot = i;
+ if (memcg == READ_ONCE(stock->cached[i])) {
stock_pages = READ_ONCE(stock->nr_pages[i]) + nr_pages;
WRITE_ONCE(stock->nr_pages[i], stock_pages);
if (stock_pages > MEMCG_CHARGE_BATCH)
drain_stock(stock, i);
- evict = false;
+ success = true;
break;
}
}
- if (evict) {
- i = get_random_u32_below(NR_MEMCG_STOCK);
- drain_stock(stock, i);
- goto again;
+ if (!success) {
+ i = empty_slot;
+ if (i == -1) {
+ i = get_random_u32_below(NR_MEMCG_STOCK);
+ drain_stock(stock, i);
+ }
+ css_get(&memcg->css);
+ WRITE_ONCE(stock->cached[i], memcg);
+ WRITE_ONCE(stock->nr_pages[i], nr_pages);
}
local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
--
2.47.1