Message-Id: <20200216193005.144157-8-jolsa@kernel.org>
Date: Sun, 16 Feb 2020 20:29:54 +0100
From: Jiri Olsa <jolsa@...nel.org>
To: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org,
Andrii Nakryiko <andriin@...com>, Yonghong Song <yhs@...com>,
Song Liu <songliubraving@...com>,
Martin KaFai Lau <kafai@...com>,
Jakub Kicinski <kuba@...nel.org>,
David Miller <davem@...hat.com>,
Björn Töpel <bjorn.topel@...el.com>,
John Fastabend <john.fastabend@...il.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>
Subject: [PATCH 07/18] bpf: Move bpf_tree add/del from bpf_prog_ksym_node_add/del
Move the bpf_tree add/del calls out of bpf_prog_ksym_node_add/del
and into their callers, because these functions will be used (and
renamed) in following patches for generic bpf_ksym objects, while
the bpf_tree is specific to bpf_prog objects.
Signed-off-by: Jiri Olsa <jolsa@...nel.org>
---
kernel/bpf/core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 9fb08b4d01f7..2fc6b28291cf 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -652,7 +652,6 @@ static void bpf_prog_ksym_node_add(struct bpf_prog_aux *aux)
{
WARN_ON_ONCE(!list_empty(&aux->ksym.lnode));
list_add_tail_rcu(&aux->ksym.lnode, &bpf_kallsyms);
- latch_tree_insert(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops);
latch_tree_insert(&aux->ksym.tnode, &bpf_ksym_tree, &bpf_ksym_tree_ops);
}
@@ -661,7 +660,6 @@ static void bpf_prog_ksym_node_del(struct bpf_prog_aux *aux)
if (list_empty(&aux->ksym.lnode))
return;
- latch_tree_erase(&aux->ksym_tnode, &bpf_tree, &bpf_tree_ops);
latch_tree_erase(&aux->ksym.tnode, &bpf_ksym_tree, &bpf_ksym_tree_ops);
list_del_rcu(&aux->ksym.lnode);
}
@@ -687,6 +685,7 @@ void bpf_prog_kallsyms_add(struct bpf_prog *fp)
bpf_get_prog_name(fp);
spin_lock_bh(&bpf_lock);
+ latch_tree_insert(&fp->aux->ksym_tnode, &bpf_tree, &bpf_tree_ops);
bpf_prog_ksym_node_add(fp->aux);
spin_unlock_bh(&bpf_lock);
}
@@ -697,6 +696,7 @@ void bpf_prog_kallsyms_del(struct bpf_prog *fp)
return;
spin_lock_bh(&bpf_lock);
+ latch_tree_erase(&fp->aux->ksym_tnode, &bpf_tree, &bpf_tree_ops);
bpf_prog_ksym_node_del(fp->aux);
spin_unlock_bh(&bpf_lock);
}
--
2.24.1