Message-Id: <20190220230135.9748-1-daniel@iogearbox.net>
Date: Thu, 21 Feb 2019 00:01:35 +0100
From: Daniel Borkmann <daniel@...earbox.net>
To: ast@...nel.org
Cc: keescook@...omium.org, netdev@...r.kernel.org,
Daniel Borkmann <daniel@...earbox.net>
Subject: [PATCH bpf-next v2] bpf, seccomp: fix false positive preemption splat for cbpf->ebpf progs
In 568f196756ad ("bpf: check that BPF programs run with preemption disabled")
a check was added to BPF_PROG_RUN() asserting that preemption is disabled on
every invocation, so as not to break eBPF assumptions (e.g. per-CPU maps).
This does not apply to seccomp, however: only cBPF programs, translated to
eBPF internally, are loaded there, and they do not make use of any
functionality that would require this assertion. Fix this false positive by
adding and using a SECCOMP_RUN() variant that does not have the cant_sleep()
check.
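For illustration (not part of the patch), a typical native eBPF call site
runs the program with preemption disabled, which is what the cant_sleep()
assertion checks under CONFIG_DEBUG_ATOMIC_SLEEP; prog/ctx/ret below are
placeholders, not a specific call site:

    preempt_disable();
    ret = BPF_PROG_RUN(prog, ctx); /* assertion holds: preemption is off */
    preempt_enable();

seccomp_run_filters(), by contrast, is entered from the syscall path with
preemption enabled, so the same assertion fires there even though its
cBPF-derived programs never touch per-CPU state.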
Fixes: 568f196756ad ("bpf: check that BPF programs run with preemption disabled")
Reported-by: syzbot+8bf19ee2aa580de7a2a7@...kaller.appspotmail.com
Signed-off-by: Daniel Borkmann <daniel@...earbox.net>
Acked-by: Kees Cook <keescook@...omium.org>
---
v1 -> v2:
- More elaborate comment and added SECCOMP_RUN
- Added Kees' ACK from earlier v1 patch
 include/linux/filter.h | 22 +++++++++++++++++++++-
 kernel/seccomp.c       |  2 +-
 2 files changed, 22 insertions(+), 2 deletions(-)
diff --git a/include/linux/filter.h b/include/linux/filter.h
index f32b3ec..cd7f957 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -533,7 +533,27 @@ struct sk_filter {
struct bpf_prog *prog;
};
-#define BPF_PROG_RUN(filter, ctx) ({ cant_sleep(); (*(filter)->bpf_func)(ctx, (filter)->insnsi); })
+#define __bpf_prog_run(prog, ctx) \
+ (*(prog)->bpf_func)(ctx, (prog)->insnsi)
+#define __bpf_prog_run__may_preempt(prog, ctx) \
+ ({ __bpf_prog_run(prog, ctx); })
+#define __bpf_prog_run__non_preempt(prog, ctx) \
+ ({ cant_sleep(); __bpf_prog_run(prog, ctx); })
+
+/* Preemption must be disabled when native eBPF programs are run so
+ * that, for example, per-CPU data structures are not broken; make
+ * sure to throw a stack trace under CONFIG_DEBUG_ATOMIC_SLEEP when
+ * we find that preemption is still enabled.
+ *
+ * The only exception today is seccomp, where progs have transitioned
+ * from cBPF to eBPF and native eBPF is _not_ supported. They can
+ * safely run with preemption enabled.
+ */
+#define BPF_PROG_RUN(prog, ctx) \
+ __bpf_prog_run__non_preempt(prog, ctx)
+
+#define SECCOMP_RUN(prog, ctx) \
+ __bpf_prog_run__may_preempt(prog, ctx)
#define BPF_SKB_CB_LEN QDISC_CB_PRIV_LEN
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index e815781..701a3cf 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -268,7 +268,7 @@ static u32 seccomp_run_filters(const struct seccomp_data *sd,
* value always takes priority (ignoring the DATA).
*/
for (; f; f = f->prev) {
- u32 cur_ret = BPF_PROG_RUN(f->prog, sd);
+ u32 cur_ret = SECCOMP_RUN(f->prog, sd);
if (ACTION_ONLY(cur_ret) < ACTION_ONLY(ret)) {
ret = cur_ret;
--
2.9.5