Message-Id: <20230706033447.54696-4-alexei.starovoitov@gmail.com>
Date: Wed, 5 Jul 2023 20:34:36 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: daniel@...earbox.net,
andrii@...nel.org,
void@...ifault.com,
houtao@...weicloud.com,
paulmck@...nel.org
Cc: tj@...nel.org,
rcu@...r.kernel.org,
netdev@...r.kernel.org,
bpf@...r.kernel.org,
kernel-team@...com
Subject: [PATCH v4 bpf-next 03/14] bpf: Let free_all() return the number of freed elements.
From: Alexei Starovoitov <ast@...nel.org>
Let the free_all() helper return the number of freed elements.
The count is not used in this patch, but it helps in debugging and
development of bpf_mem_alloc.
For example, this diff for __free_rcu():
- free_all(llist_del_all(&c->waiting_for_gp_ttrace), !!c->percpu_size);
+ printk("cpu %d freed %d objs after tasks trace\n", raw_smp_processor_id(),
+ free_all(llist_del_all(&c->waiting_for_gp_ttrace), !!c->percpu_size));
would show how busy RCU tasks trace is.
In an artificial benchmark where one cpu is allocating and a different cpu
is freeing, RCU tasks trace cannot keep up: the list of objects waiting for
the grace period keeps growing from thousands to millions and eventually OOMs.
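For reference, a minimal sketch of such a cross-CPU benchmark (hypothetical,
not part of this patch). It assumes the existing bpf_mem_alloc_init() /
bpf_mem_alloc() / bpf_mem_free() API; the names alloc_thread, free_thread,
in_flight and the 64-byte object size are made up for illustration, and
kthread setup/teardown (kthread_bind() to two different cpus) is omitted:

/* Hypothetical sketch: one kthread allocates on CPU 0 while another frees
 * on CPU 1, so freed objects funnel through the RCU tasks trace path where
 * free_all() eventually reclaims them.
 */
static struct bpf_mem_alloc ma;	/* bpf_mem_alloc_init(&ma, 0, false) at setup */
static LLIST_HEAD(in_flight);

static int alloc_thread(void *arg)	/* kthread_bind() to CPU 0 */
{
	while (!kthread_should_stop()) {
		struct llist_node *obj = bpf_mem_alloc(&ma, 64);

		if (obj)
			llist_add(obj, &in_flight);
		cond_resched();
	}
	return 0;
}

static int free_thread(void *arg)	/* kthread_bind() to CPU 1 */
{
	while (!kthread_should_stop()) {
		struct llist_node *pos, *t;

		llist_for_each_safe(pos, t, llist_del_all(&in_flight))
			bpf_mem_free(&ma, pos);
		cond_resched();
	}
	return 0;
}

Together with the printk above in __free_rcu(), a run like this shows whether
RCU tasks trace keeps up with the freeing rate.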
Signed-off-by: Alexei Starovoitov <ast@...nel.org>
Acked-by: Hou Tao <houtao1@...wei.com>
---
kernel/bpf/memalloc.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index b0011217be6c..693651d2648b 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -223,12 +223,16 @@ static void free_one(void *obj, bool percpu)
 	kfree(obj);
 }
 
-static void free_all(struct llist_node *llnode, bool percpu)
+static int free_all(struct llist_node *llnode, bool percpu)
 {
 	struct llist_node *pos, *t;
+	int cnt = 0;
 
-	llist_for_each_safe(pos, t, llnode)
+	llist_for_each_safe(pos, t, llnode) {
 		free_one(pos, percpu);
+		cnt++;
+	}
+	return cnt;
 }
 
 static void __free_rcu(struct rcu_head *head)
--
2.34.1