Message-ID: <20180423170921.16162-1-guro@fb.com>
Date: Mon, 23 Apr 2018 18:09:21 +0100
From: Roman Gushchin <guro@...com>
To: <netdev@...r.kernel.org>
CC: <kernel-team@...com>, Roman Gushchin <guro@...com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>
Subject: [PATCH bpf] bpf: disable and restore preemption in __BPF_PROG_RUN_ARRAY

Running BPF programs requires preemption to be disabled; however, at
least some* of the BPF_PROG_RUN_ARRAY users do not follow this rule.

To fix this bug, and to keep it from reappearing in the future, let's
add explicit preempt_disable()/preempt_enable() calls to the
__BPF_PROG_RUN_ARRAY code itself, so that every user is covered.

* for example:
[ 17.624472] RIP: 0010:__cgroup_bpf_run_filter_sk+0x1c4/0x1d0
...
[ 17.640890] inet6_create+0x3eb/0x520
[ 17.641405] __sock_create+0x242/0x340
[ 17.641939] __sys_socket+0x57/0xe0
[ 17.642370] ? trace_hardirqs_off_thunk+0x1a/0x1c
[ 17.642944] SyS_socket+0xa/0x10
[ 17.643357] do_syscall_64+0x79/0x220
[ 17.643879] entry_SYSCALL_64_after_hwframe+0x42/0xb7
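
The warning above fires because the socket-creation path reaches
__cgroup_bpf_run_filter_sk() with preemption enabled. BPF programs
depend on per-CPU state (per-CPU maps, scratch buffers) staying on one
CPU for the duration of the run, so without this patch every call site
would have to follow a pattern like the sketch below (illustrative
only, loosely modeled on __cgroup_bpf_run_filter_sk(); not verbatim
kernel source):

	static int example_run_filter_sk(struct cgroup *cgrp,
					 struct sock *sk,
					 enum bpf_attach_type type)
	{
		int ret;

		/* Keep the task on one CPU while the programs run
		 * and touch per-CPU state. */
		preempt_disable();
		ret = BPF_PROG_RUN_ARRAY(cgrp->bpf.effective[type], sk,
					 BPF_PROG_RUN);
		preempt_enable();

		return ret == 1 ? 0 : -EPERM;
	}

Moving the preempt_disable()/preempt_enable() pair into
__BPF_PROG_RUN_ARRAY itself removes that burden from the callers and
fixes all of them at once.
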
Signed-off-by: Roman Gushchin <guro@...com>
Cc: Alexei Starovoitov <ast@...nel.org>
Cc: Daniel Borkmann <daniel@...earbox.net>
---
include/linux/bpf.h | 2 ++
1 file changed, 2 insertions(+)
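
For context, __BPF_PROG_RUN_ARRAY is the shared body behind the two
public wrappers, which is why patching it once covers every user (a
sketch of the surrounding 4.17-era bpf.h, abbreviated):

	#define BPF_PROG_RUN_ARRAY(array, ctx, func)		\
		__BPF_PROG_RUN_ARRAY(array, ctx, func, false)

	#define BPF_PROG_RUN_ARRAY_CHECK(array, ctx, func)	\
		__BPF_PROG_RUN_ARRAY(array, ctx, func, true)
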
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 486e65e3db26..dc586cc64bc2 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -351,6 +351,7 @@ int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
 		struct bpf_prog **_prog, *__prog;	\
 		struct bpf_prog_array *_array;		\
 		u32 _ret = 1;				\
+		preempt_disable();			\
 		rcu_read_lock();			\
 		_array = rcu_dereference(array);	\
 		if (unlikely(check_non_null && !_array))\
@@ -362,6 +363,7 @@ int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
 		}					\
 _out:							\
 		rcu_read_unlock();			\
+		preempt_enable();			\
 		_ret;					\
 	 })
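
With the change applied, the macro body reads roughly as follows
(reconstructed from the hunks above plus the unmodified context; a
sketch of the 4.17-era code, not an authoritative copy):

	#define __BPF_PROG_RUN_ARRAY(array, ctx, func, check_non_null)	\
		({						\
			struct bpf_prog **_prog, *__prog;	\
			struct bpf_prog_array *_array;		\
			u32 _ret = 1;				\
			preempt_disable();			\
			rcu_read_lock();			\
			_array = rcu_dereference(array);	\
			if (unlikely(check_non_null && !_array))\
				goto _out;			\
			_prog = _array->progs;			\
			while ((__prog = READ_ONCE(*_prog))) {	\
				_ret &= func(__prog, ctx);	\
				_prog++;			\
			}					\
	_out:							\
			rcu_read_unlock();			\
			preempt_enable();			\
			_ret;					\
		 })

Note that preemption is disabled before the RCU read-side critical
section begins and re-enabled only after it ends, so the whole walk
of the program array runs with both protections held.
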
--
2.14.3