Message-ID: <CAEf4BzZqyE9XKePk0pf8U7-3ei17vO8jQiEBH_fTN7vOst+gWg@mail.gmail.com>
Date: Mon, 15 Aug 2022 20:33:33 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Martin KaFai Lau <kafai@...com>
Cc: bpf@...r.kernel.org, netdev@...r.kernel.org,
Alexei Starovoitov <ast@...nel.org>,
Andrii Nakryiko <andrii@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
David Miller <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, kernel-team@...com,
Paolo Abeni <pabeni@...hat.com>,
Stanislav Fomichev <sdf@...gle.com>
Subject: Re: [PATCH v3 bpf-next 07/15] bpf: Initialize the bpf_run_ctx in bpf_iter_run_prog()
On Wed, Aug 10, 2022 at 12:11 PM Martin KaFai Lau <kafai@...com> wrote:
>
> The bpf-iter progs for tcp and unix sk can do bpf_setsockopt(),
> which needs has_current_bpf_ctx() to decide if it is called from a
> bpf prog. This patch initializes the bpf_run_ctx in
> bpf_iter_run_prog() so that has_current_bpf_ctx() can use it.
>
> Signed-off-by: Martin KaFai Lau <kafai@...com>
> ---
> include/linux/bpf.h | 2 +-
> kernel/bpf/bpf_iter.c | 5 +++++
> 2 files changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 0a600b2013cc..15ab980e9525 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1967,7 +1967,7 @@ static inline bool unprivileged_ebpf_enabled(void)
> }
>
> /* Not all bpf prog type has the bpf_ctx.
> - * Only trampoline and cgroup-bpf have it.
> + * Only trampoline, cgroup-bpf, and iter have it.
Apart from this part, which I'd drop, LGTM:
Acked-by: Andrii Nakryiko <andrii@...nel.org>
> * For the bpf prog type that has initialized the bpf_ctx,
> * this function can be used to decide if a kernel function
> * is called by a bpf program.
> diff --git a/kernel/bpf/bpf_iter.c b/kernel/bpf/bpf_iter.c
> index 4b112aa8bba3..6476b2c03527 100644
> --- a/kernel/bpf/bpf_iter.c
> +++ b/kernel/bpf/bpf_iter.c
> @@ -685,19 +685,24 @@ struct bpf_prog *bpf_iter_get_info(struct bpf_iter_meta *meta, bool in_stop)
>
> int bpf_iter_run_prog(struct bpf_prog *prog, void *ctx)
> {
> + struct bpf_run_ctx run_ctx, *old_run_ctx;
> int ret;
>
> if (prog->aux->sleepable) {
> rcu_read_lock_trace();
> migrate_disable();
> might_fault();
> + old_run_ctx = bpf_set_run_ctx(&run_ctx);
> ret = bpf_prog_run(prog, ctx);
> + bpf_reset_run_ctx(old_run_ctx);
> migrate_enable();
> rcu_read_unlock_trace();
> } else {
> rcu_read_lock();
> migrate_disable();
> + old_run_ctx = bpf_set_run_ctx(&run_ctx);
> ret = bpf_prog_run(prog, ctx);
> + bpf_reset_run_ctx(old_run_ctx);
> migrate_enable();
> rcu_read_unlock();
> }
> --
> 2.30.2
>