Message-ID: <5f954e14-3dec-5e6e-56f0-aa7b32253c53@iogearbox.net>
Date: Mon, 25 Feb 2019 22:36:05 +0100
From: Daniel Borkmann <daniel@...earbox.net>
To: Stanislav Fomichev <sdf@...gle.com>, netdev@...r.kernel.org
Cc: davem@...emloft.net, ast@...nel.org,
syzbot <syzkaller@...glegroups.com>
Subject: Re: [PATCH bpf-next] bpf/test_run: fix unkillable BPF_PROG_TEST_RUN
for flow dissector
On 02/19/2019 07:54 PM, Stanislav Fomichev wrote:
> Syzbot found out that running BPF_PROG_TEST_RUN with repeat=0xffffffff
> makes the process unkillable. The problem is that when CONFIG_PREEMPT is
> enabled, we never see need_resched() return true. This is because
> preempt_enable() (which we call in bpf_test_run_one on each iteration)
> now handles the resched itself if it's needed.
>
> Let's disable preemption for the whole run, not per test. That way we
> can properly see whether a resched is needed.
> Let's also properly return -EINTR to userspace in case of a signal
> interrupt.
>
> This is a follow-up to a recently fixed issue in bpf_test_run, see
> commit df1a2cb7c74b ("bpf/test_run: fix unkillable
> BPF_PROG_TEST_RUN").
>
> Reported-by: syzbot <syzkaller@...glegroups.com>
> Signed-off-by: Stanislav Fomichev <sdf@...gle.com>
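
For reference on the mechanism described above: with CONFIG_PREEMPT,
preempt_enable() looks roughly like this (simplified from
include/linux/preempt.h):

  /* Simplified: the real macro carries a few more annotations. */
  #define preempt_enable() \
  do { \
          barrier(); \
          if (unlikely(preempt_count_dec_and_test())) \
                  __preempt_schedule(); \
  } while (0)

So the pending reschedule is consumed inside preempt_enable() at the end of
every iteration, and the explicit need_resched() check that followed it in
the old loop never saw the flag set.
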
Applied, thanks! It would be nice to consolidate at least some of these bits
so they aren't duplicated; perhaps __always_inline might help for function
args in avoiding indirect calls (iirc, perf's ring buffer handling uses this
heavily).
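
For illustration, the kind of consolidation meant here could look roughly
like the sketch below (the helper name bpf_test_run_loop() and the callback
signature are made up for the sake of the example, not actual kernel code):

  /* Hypothetical shared run loop; each test_run variant supplies its own
   * per-iteration body via run_one().  Because the whole helper is
   * __always_inline, every caller gets its own copy and the compiler can
   * turn run_one() into a direct (or fully inlined) call, so no indirect
   * call is left in the hot loop.
   */
  static __always_inline int bpf_test_run_loop(struct bpf_prog *prog,
                                               u32 repeat,
                                               u32 (*run_one)(struct bpf_prog *prog,
                                                              void *ctx),
                                               void *ctx, u32 *retval,
                                               u32 *duration)
  {
          u64 time_start, time_spent = 0;
          int ret = 0;
          u32 i;

          rcu_read_lock();
          preempt_disable();
          time_start = ktime_get_ns();
          for (i = 0; i < repeat; i++) {
                  *retval = run_one(prog, ctx);

                  if (signal_pending(current)) {
                          ret = -EINTR;
                          break;
                  }

                  if (need_resched()) {
                          time_spent += ktime_get_ns() - time_start;
                          preempt_enable();
                          rcu_read_unlock();

                          cond_resched();

                          rcu_read_lock();
                          preempt_disable();
                          time_start = ktime_get_ns();
                  }
          }
          time_spent += ktime_get_ns() - time_start;
          preempt_enable();
          rcu_read_unlock();

          do_div(time_spent, repeat);
          *duration = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;

          return ret;
  }

The flow dissector variant would then only pass a small run_one() wrapping
__skb_flow_bpf_dissect(), the existing skb/xdp test runs theirs, and the
rest of the bookkeeping would be shared.
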
> ---
> net/bpf/test_run.c | 26 ++++++++++++++++++++------
> 1 file changed, 20 insertions(+), 6 deletions(-)
>
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index 2c5172b33209..619655db8d9e 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -293,31 +293,45 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
>  	if (!repeat)
>  		repeat = 1;
> 
> +	rcu_read_lock();
> +	preempt_disable();
>  	time_start = ktime_get_ns();
>  	for (i = 0; i < repeat; i++) {
> -		preempt_disable();
> -		rcu_read_lock();
>  		retval = __skb_flow_bpf_dissect(prog, skb,
>  						&flow_keys_dissector,
>  						&flow_keys);
> -		rcu_read_unlock();
> -		preempt_enable();
> +
> +		if (signal_pending(current)) {
> +			preempt_enable();
> +			rcu_read_unlock();
> +
> +			ret = -EINTR;
> +			goto out;
> +		}
> 
>  		if (need_resched()) {
> -			if (signal_pending(current))
> -				break;
>  			time_spent += ktime_get_ns() - time_start;
> +			preempt_enable();
> +			rcu_read_unlock();
> +
>  			cond_resched();
> +
> +			rcu_read_lock();
> +			preempt_disable();
>  			time_start = ktime_get_ns();
>  		}
>  	}
>  	time_spent += ktime_get_ns() - time_start;
> +	preempt_enable();
> +	rcu_read_unlock();
> +
>  	do_div(time_spent, repeat);
>  	duration = time_spent > U32_MAX ? U32_MAX : (u32)time_spent;
> 
>  	ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys),
>  			      retval, duration);
> 
> +out:
>  	kfree_skb(skb);
>  	kfree(sk);
>  	return ret;
> 
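
Side note on the -EINTR part, for anyone poking at this from userspace: the
sketch below shows roughly where the new error surfaces (illustration only;
the helper, the packet buffer handling and the prog_fd setup are made up and
not part of the patch):

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/bpf.h>

  /* Run an already-loaded program prog_fd over pkt, repeat times. */
  static int test_run_prog(int prog_fd, void *pkt, __u32 pkt_len, __u32 repeat)
  {
          union bpf_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.test.prog_fd = prog_fd;
          attr.test.data_in = (__u64)(unsigned long)pkt;
          attr.test.data_size_in = pkt_len;
          attr.test.repeat = repeat;

          if (syscall(__NR_bpf, BPF_PROG_TEST_RUN, &attr, sizeof(attr)) < 0) {
                  /* With this fix a huge repeat count can be interrupted. */
                  if (errno == EINTR)
                          fprintf(stderr, "test run interrupted by signal\n");
                  return -errno;
          }

          printf("retval=%u duration=%uns\n", attr.test.retval,
                 attr.test.duration);
          return 0;
  }

Before this change such a call with repeat=0xffffffff left the process
unkillable until the loop finished; now a signal terminates the run early
and the syscall fails with EINTR.
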