Date: Mon, 12 Dec 2022 10:03:31 -0800
From: Yonghong Song <yhs@...a.com>
To: david.keisarschm@...l.huji.ac.il,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Song Liu <song@...nel.org>, Yonghong Song <yhs@...com>,
KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...gle.com>,
Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>
Cc: aksecurity@...il.com, ilay.bahat1@...il.com, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH 2/5] Replace invocation of weak PRNG in kernel/bpf/core.c
On 12/11/22 2:16 PM, david.keisarschm@...l.huji.ac.il wrote:
> From: David <david.keisarschm@...l.huji.ac.il>
>
> We replaced the invocation of prandom_u32_state() with
> get_random_u32(). We also removed the maintained state,
> which was a per-CPU variable, since get_random_u32()
> maintains its own per-CPU state, and deleted the state
> initializer, which is no longer needed.
>
> Signed-off-by: David <david.keisarschm@...l.huji.ac.il>
> ---
> include/linux/bpf.h | 1 -
> kernel/bpf/core.c | 13 +------------
> kernel/bpf/verifier.c | 2 --
> net/core/filter.c | 1 -
> 4 files changed, 1 insertion(+), 16 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index c1bd1bd10..0689520b9 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2572,7 +2572,6 @@ const struct bpf_func_proto *tracing_prog_func_proto(
> enum bpf_func_id func_id, const struct bpf_prog *prog);
>
> /* Shared helpers among cBPF and eBPF. */
> -void bpf_user_rnd_init_once(void);
> u64 bpf_user_rnd_u32(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
> u64 bpf_get_raw_cpu_id(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
>
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 4cb5421d9..a6f06894e 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -2579,13 +2579,6 @@ void bpf_prog_free(struct bpf_prog *fp)
> }
> EXPORT_SYMBOL_GPL(bpf_prog_free);
>
> -/* RNG for unpriviledged user space with separated state from prandom_u32(). */
> -static DEFINE_PER_CPU(struct rnd_state, bpf_user_rnd_state);
> -
> -void bpf_user_rnd_init_once(void)
> -{
> - prandom_init_once(&bpf_user_rnd_state);
> -}
>
> BPF_CALL_0(bpf_user_rnd_u32)
> {
> @@ -2595,12 +2588,8 @@ BPF_CALL_0(bpf_user_rnd_u32)
> * transformations. Register assignments from both sides are
> * different, f.e. classic always sets fn(ctx, A, X) here.
> */
> - struct rnd_state *state;
> u32 res;
> -
> - state = &get_cpu_var(bpf_user_rnd_state);
> - res = predictable_rng_prandom_u32_state(state);
> - put_cpu_var(bpf_user_rnd_state);
> + res = get_random_u32();
>
> return res;
> }
Please see the discussion here:
https://lore.kernel.org/bpf/87edtctz8t.fsf@toke.dk/
There is a performance concern with the above change.
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 264b3dc71..9f22fb3fa 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -14049,8 +14049,6 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>
> if (insn->imm == BPF_FUNC_get_route_realm)
> prog->dst_needed = 1;
> - if (insn->imm == BPF_FUNC_get_prandom_u32)
> - bpf_user_rnd_init_once();
> if (insn->imm == BPF_FUNC_override_return)
> prog->kprobe_override = 1;
> if (insn->imm == BPF_FUNC_tail_call) {
> diff --git a/net/core/filter.c b/net/core/filter.c
> index bb0136e7a..7a595ac00 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -443,7 +443,6 @@ static bool convert_bpf_extensions(struct sock_filter *fp,
> break;
> case SKF_AD_OFF + SKF_AD_RANDOM:
> *insn = BPF_EMIT_CALL(bpf_user_rnd_u32);
> - bpf_user_rnd_init_once();
> break;
> }
> break;