Message-Id: <20210929235910.1765396-10-jevburton.kernel@gmail.com>
Date: Wed, 29 Sep 2021 23:59:06 +0000
From: Joe Burton <jevburton.kernel@...il.com>
To: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <kafai@...com>
Cc: Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>,
Petar Penkov <ppenkov@...gle.com>,
Stanislav Fomichev <sdf@...gle.com>,
Hao Luo <haoluo@...gle.com>, netdev@...r.kernel.org,
bpf@...r.kernel.org, Joe Burton <jevburton@...gle.com>
Subject: [RFC PATCH v2 09/13] bpf: Add infinite loop check on map tracers
From: Joe Burton <jevburton@...gle.com>
Prevent programs from being attached to a map if that attachment could
cause an infinite loop. A simple example: a program updates the same
map that it is tracing. The map update causes the program to run, that
run performs another update, and so on. A more complex example: an
update to map M0 triggers tracer T0. T0 updates map M1. M1 is traced
by tracer T1. T1 updates M0, closing the cycle.

We prevent this situation by enforcing that the set of programs
"reachable" from a given map does not include the proposed tracer.
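For illustration only (not part of this patch), the "simple example"
could come from a tracer like the sketch below. The section name and
attach point are assumptions based on earlier patches in this series;
the point is that the bpf_map_update_elem() call on the traced map
creates the cycle, so with this change the attachment is rejected with
-EINVAL instead of looping on every update.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical example: a one-entry hash map and a tracer that updates
 * the very map it is attached to.
 */
struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, __u64);
} traced_map SEC(".maps");

/* Section name is an assumption; the attach mechanics come from the
 * rest of this series, not from this patch.
 */
SEC("map_trace/update_elem")
int self_tracer(void *ctx)
{
	__u32 key = 0;
	__u64 val = 1;

	/* This update would re-trigger the tracer, i.e. an infinite
	 * loop; bpf_map_trace_would_loop() now catches it at attach
	 * time because traced_map is in the program's used_maps.
	 */
	bpf_map_update_elem(&traced_map, &key, &val, BPF_ANY);
	return 0;
}

char _license[] SEC("license") = "GPL";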
Signed-off-by: Joe Burton <jevburton@...gle.com>
---
kernel/bpf/map_trace.c | 46 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 46 insertions(+)
diff --git a/kernel/bpf/map_trace.c b/kernel/bpf/map_trace.c
index d7c52e197482..80ceda8b1e62 100644
--- a/kernel/bpf/map_trace.c
+++ b/kernel/bpf/map_trace.c
@@ -148,6 +148,48 @@ static const struct bpf_link_ops bpf_map_trace_link_ops = {
 	.update_prog = bpf_map_trace_link_replace,
 };
 
+/* Determine whether attaching "prog" to "map" would create an infinite loop.
+ * If "prog" updates "map", then running "prog" again on a map update would
+ * loop.
+ */
+static int bpf_map_trace_would_loop(struct bpf_prog *prog,
+				    struct bpf_map *map)
+{
+	struct bpf_map_trace_prog *item;
+	struct bpf_prog_aux *aux;
+	struct bpf_map *aux_map;
+	int i, j, err = 0;
+
+	aux = prog->aux;
+	if (!aux)
+		return 0;
+	mutex_lock(&aux->used_maps_mutex);
+	for (i = 0; i < aux->used_map_cnt && !err; i++) {
+		aux_map = aux->used_maps[i];
+		if (aux_map == map) {
+			err = -EINVAL;
+			break;
+		}
+		for (j = 0; j < MAX_BPF_MAP_TRACE_TYPE && !err; j++) {
+			if (!aux_map->trace_progs)
+				continue;
+			rcu_read_lock();
+			list_for_each_entry_rcu(item,
+						&aux_map->trace_progs->progs[j].list,
+						list) {
+				err = bpf_map_trace_would_loop(
+						item->prog, map);
+				if (err)
+					break;
+			}
+			rcu_read_unlock();
+		}
+	}
+	mutex_unlock(&aux->used_maps_mutex);
+	return err;
+}
+
+
 int bpf_map_attach_trace(struct bpf_prog *prog,
 			 struct bpf_map *map,
 			 struct bpf_map_trace_link_info *linfo)
@@ -180,6 +222,10 @@ int bpf_map_attach_trace(struct bpf_prog *prog,
 		goto put_map;
 	}
 
+	err = bpf_map_trace_would_loop(prog, map);
+	if (err)
+		goto put_map;
+
 	trace_prog = kmalloc(sizeof(*trace_prog), GFP_KERNEL);
 	if (!trace_prog) {
 		err = -ENOMEM;
--
2.33.0.685.g46640cef36-goog