Message-ID: <CAEf4BzaSMB6oKBO2VXvz4cVE9wXqYq+vyD=EOe3YJ3a-L==WCg@mail.gmail.com>
Date:   Thu, 16 Mar 2023 13:34:06 -0700
From:   Andrii Nakryiko <andrii.nakryiko@...il.com>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:     davem@...emloft.net, daniel@...earbox.net, andrii@...nel.org,
        martin.lau@...nel.org, void@...ifault.com, davemarchevsky@...a.com,
        tj@...nel.org, memxor@...il.com, netdev@...r.kernel.org,
        bpf@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH bpf-next 1/2] bpf: Allow ld_imm64 instruction to point to kfunc.

On Wed, Mar 15, 2023 at 3:36 PM Alexei Starovoitov
<alexei.starovoitov@...il.com> wrote:
>
> From: Alexei Starovoitov <ast@...nel.org>
>
> Allow ld_imm64 insn with BPF_PSEUDO_BTF_ID to hold the address of kfunc.
> PTR_MEM is already recognized for NULL-ness by is_branch_taken(),
> so unresolved kfuncs will be seen as zero.
> This allows BPF programs to detect at load time whether kfunc is present
> in the kernel with bpf_kfunc_exists() macro.
>
> Signed-off-by: Alexei Starovoitov <ast@...nel.org>
> ---
>  kernel/bpf/verifier.c       | 7 +++++--
>  tools/lib/bpf/bpf_helpers.h | 3 +++
>  2 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 60793f793ca6..4e49d34d8cd6 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -15955,8 +15955,8 @@ static int check_pseudo_btf_id(struct bpf_verifier_env *env,
>                 goto err_put;
>         }
>
> -       if (!btf_type_is_var(t)) {
> -               verbose(env, "pseudo btf_id %d in ldimm64 isn't KIND_VAR.\n", id);
> +       if (!btf_type_is_var(t) && !btf_type_is_func(t)) {
> +               verbose(env, "pseudo btf_id %d in ldimm64 isn't KIND_VAR or KIND_FUNC\n", id);
>                 err = -EINVAL;
>                 goto err_put;
>         }
> @@ -15990,6 +15990,9 @@ static int check_pseudo_btf_id(struct bpf_verifier_env *env,
>                 aux->btf_var.reg_type = PTR_TO_BTF_ID | MEM_PERCPU;
>                 aux->btf_var.btf = btf;
>                 aux->btf_var.btf_id = type;
> +       } else if (btf_type_is_func(t)) {
> +               aux->btf_var.reg_type = PTR_TO_MEM | MEM_RDONLY;
> +               aux->btf_var.mem_size = 0;
>         } else if (!btf_type_is_struct(t)) {
>                 const struct btf_type *ret;
>                 const char *tname;
> diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
> index 7d12d3e620cc..43abe4c29409 100644
> --- a/tools/lib/bpf/bpf_helpers.h
> +++ b/tools/lib/bpf/bpf_helpers.h
> @@ -177,6 +177,9 @@ enum libbpf_tristate {
>  #define __kptr_untrusted __attribute__((btf_type_tag("kptr_untrusted")))
>  #define __kptr __attribute__((btf_type_tag("kptr")))
>
> +/* pass function pointer through asm otherwise compiler assumes that any function != 0 */
> +#define bpf_kfunc_exists(fn) ({ void *__p = fn; asm ("" : "+r"(__p)); __p; })
> +

I think we shouldn't add this helper macro. It just obfuscates a
misuse of libbpf features and will be more confusing in practice.

If I understand the comment correctly, that asm is there to prevent
the compiler from optimizing based on *knowing* that the kfunc exists
(its extern resolves to something other than 0), even if the kfunc's
ksym is not declared __weak.
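
In other words, roughly this (just a sketch, bpf_some_kfunc is a
made-up name, not a real kfunc):

#include <bpf/bpf_helpers.h>

extern void bpf_some_kfunc(void) __ksym; /* note: non-weak extern */

SEC("tp/syscalls/sys_enter_nanosleep")
int probe(void *ctx)
{
	void *p = (void *)bpf_some_kfunc;

	/* without this asm the compiler can assume bpf_some_kfunc != 0
	 * and fold the check below into always-true
	 */
	asm volatile("" : "+r"(p));

	if (p)
		bpf_some_kfunc();
	return 0;
}

char _license[] SEC("license") = "GPL";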

But that's actually bad and misleading: even if the code is written
to treat the kfunc as optional, libbpf will fail the load before we
even get to the kernel, because it won't be able to find the ksym's
BTF information in kernel BTF. An optional kfunc *has* to be marked
__weak.

__weak has consistent semantics for marking something as optional.
This is documented (e.g., [0] for kconfig variables). We do have
tests making sure this works for weak __kconfig and variable __ksyms.
Let's add similar ones for kfunc ksyms.


  [0] https://nakryiko.com/posts/bpf-core-reference-guide/#kconfig-extern-variables
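
For reference, the existing pattern for weak __kconfig and variable
__ksyms looks roughly like this (a sketch, not the exact selftest
code; the symbol names are just examples):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

extern bool CONFIG_BPF_SYSCALL __kconfig __weak;  /* false if not found */
extern const void bpf_prog_active __ksym __weak;  /* NULL if not in kernel BTF */

bool kconfig_found;
bool ksym_found;

SEC("raw_tp/sys_enter")
int check_weak(void *ctx)
{
	kconfig_found = CONFIG_BPF_SYSCALL;
	ksym_found = &bpf_prog_active != NULL;
	return 0;
}

char _license[] SEC("license") = "GPL";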


Just to demonstrate what I mentioned above, I tried the quick
experiment below. The commented-out block assumes that feature
detection is done by user space and use_kfunc is set to true or
false, depending on whether that kfunc is detected. But because
bpf_iter_num_new1 is declared as a non-weak __ksym, this fails with
either use_kfunc=true or use_kfunc=false, which is correct behavior
from libbpf's POV.

On the other hand, the second part, which your patch now makes
possible, is the proper way to detect whether a kfunc is defined:
the kfunc is declared as __weak and checked for NULL. It works even
if the kfunc is not present in the kernel.


So I think bpf_kfunc_exists() will just hide and obfuscate the actual
issue (lack of __weak marking for something that's optional).

diff --git a/tools/testing/selftests/bpf/progs/test_vmlinux.c b/tools/testing/selftests/bpf/progs/test_vmlinux.c
index 4b8e37f7fd06..92291a0727b7 100644
--- a/tools/testing/selftests/bpf/progs/test_vmlinux.c
+++ b/tools/testing/selftests/bpf/progs/test_vmlinux.c
@@ -6,15 +6,21 @@
 #include <bpf/bpf_helpers.h>
 #include <bpf/bpf_tracing.h>
 #include <bpf/bpf_core_read.h>
+#include "bpf_misc.h"

 #define MY_TV_NSEC 1337

+const volatile bool use_kfunc = false;
+
 bool tp_called = false;
 bool raw_tp_called = false;
 bool tp_btf_called = false;
 bool kprobe_called = false;
 bool fentry_called = false;

+extern int bpf_iter_num_new1(struct bpf_iter_num *it, int start, int end) __ksym;
+extern int bpf_iter_num_new2(struct bpf_iter_num *it, int start, int end) __ksym __weak;
+
 SEC("tp/syscalls/sys_enter_nanosleep")
 int handle__tp(struct trace_event_raw_sys_enter *args)
 {
@@ -24,6 +30,19 @@ int handle__tp(struct trace_event_raw_sys_enter *args)
        if (args->id != __NR_nanosleep)
                return 0;

+       /*
+       if (use_kfunc) { // fails
+               struct bpf_iter_num it;
+               bpf_iter_num_new1(&it, 0, 100);
+               bpf_iter_num_destroy(&it);
+       }
+       */
+       if (bpf_iter_num_new2) { // works
+               struct bpf_iter_num it;
+               bpf_iter_num_new2(&it, 0, 100);
+               bpf_iter_num_destroy(&it);
+       }
+
        ts = (void *)args->args[0];
        if (bpf_probe_read_user(&tv_nsec, sizeof(ts->tv_nsec), &ts->tv_nsec) ||
            tv_nsec != MY_TV_NSEC)


>  #ifndef ___bpf_concat
>  #define ___bpf_concat(a, b) a ## b
>  #endif
> --
> 2.34.1
>
