Message-ID: <87a75ftkwu.fsf@linux.intel.com>
Date: Tue, 18 Feb 2020 17:39:45 -0800
From: Vinicius Costa Gomes <vinicius.gomes@...el.com>
To: Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>
Cc: David Miller <davem@...emloft.net>, bpf@...r.kernel.org,
netdev@...r.kernel.org, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Sebastian Sewior <bigeasy@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Clark Williams <williams@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Juri Lelli <juri.lelli@...hat.com>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [RFC patch 09/19] bpf: Use BPF_PROG_RUN_PIN_ON_CPU() at simple call sites.

Hi,

Thomas Gleixner <tglx@...utronix.de> writes:

> From: David Miller <davem@...emloft.net>
>
> All of these cases are strictly of the form:
>
> preempt_disable();
> BPF_PROG_RUN(...);
> preempt_enable();
>
> Replace this with BPF_PROG_RUN_PIN_ON_CPU() which wraps BPF_PROG_RUN()
> with:
>
> migrate_disable();
> BPF_PROG_RUN(...);
> migrate_enable();
>
> On non-RT enabled kernels this maps to preempt_disable/enable(), and on
> RT enabled kernels it solely prevents migration, which is sufficient as
> there is no requirement to prevent reentrancy into any BPF program from
> a preempting task. The only requirement is that the program stays on
> the same CPU.
>
> Therefore, this is a trivially correct transformation.
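
For readers following along: my understanding is that the new wrapper
amounts to something like the sketch below (my assumption based on the
description above, not necessarily the exact definition from earlier in
this series):

	/* Sketch only -- inferred from the changelog above. On non-RT
	 * kernels, migrate_disable()/migrate_enable() are expected to
	 * map to preempt_disable()/preempt_enable().
	 */
	#define BPF_PROG_RUN_PIN_ON_CPU(prog, ctx) ({	\
		u32 __ret;				\
		migrate_disable();			\
		__ret = BPF_PROG_RUN(prog, ctx);	\
		migrate_enable();			\
		__ret;					\
	})
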
>
> [ tglx: Converted to BPF_PROG_RUN_PIN_ON_CPU() ]
>
> Signed-off-by: David S. Miller <davem@...emloft.net>
> Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
>
> ---
> include/linux/filter.h | 4 +---
> kernel/seccomp.c | 4 +---
> net/core/flow_dissector.c | 4 +---
> net/core/skmsg.c | 8 ++------
> net/kcm/kcmsock.c | 4 +---
> 5 files changed, 6 insertions(+), 18 deletions(-)
>
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -713,9 +713,7 @@ static inline u32 bpf_prog_run_clear_cb(
> if (unlikely(prog->cb_access))
> memset(cb_data, 0, BPF_SKB_CB_LEN);
>
> - preempt_disable();
> - res = BPF_PROG_RUN(prog, skb);
> - preempt_enable();
> + res = BPF_PROG_RUN_PIN_ON_CPU(prog, skb);
> return res;
> }
>
> --- a/kernel/seccomp.c
> +++ b/kernel/seccomp.c
> @@ -268,16 +268,14 @@ static u32 seccomp_run_filters(const str
> * All filters in the list are evaluated and the lowest BPF return
> * value always takes priority (ignoring the DATA).
> */
> - preempt_disable();
> for (; f; f = f->prev) {
> - u32 cur_ret = BPF_PROG_RUN(f->prog, sd);
> + u32 cur_ret = BPF_PROG_RUN_PIN_ON_CPU(f->prog, sd);
>
More of a question, really: isn't the behavior changing here? I.e.,
shouldn't migrate_disable()/migrate_enable() be moved outside the loop?
Or is it not a problem if the filters of a single seccomp run execute
on different CPUs?
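
To make the question concrete, I was picturing something like this
(just a sketch, with the loop body elided):

	migrate_disable();
	for (; f; f = f->prev) {
		u32 cur_ret = BPF_PROG_RUN(f->prog, sd);

		/* ... keep the lowest return value, as before ... */
	}
	migrate_enable();
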
--
Vinicius