Date:   Thu, 7 Nov 2019 21:13:36 -0800
From:   Andrii Nakryiko <andrii.nakryiko@...il.com>
To:     Alexei Starovoitov <ast@...nel.org>
Cc:     "David S. Miller" <davem@...emloft.net>,
        Daniel Borkmann <daniel@...earbox.net>, x86@...nel.org,
        Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
        Kernel Team <kernel-team@...com>
Subject: Re: [PATCH v2 bpf-next 12/17] bpf: Fix race in btf_resolve_helper_id()

On Wed, Nov 6, 2019 at 9:48 PM Alexei Starovoitov <ast@...nel.org> wrote:
>
> btf_resolve_helper_id() caching logic is racy, since under root the verifier
> can verify several programs in parallel. Fix it with extra spin_lock.
>
> Fixes: a7658e1a4164 ("bpf: Check types of arguments passed into helpers")
> Signed-off-by: Alexei Starovoitov <ast@...nel.org>
> ---
>  include/linux/bpf.h   |  3 ++-
>  kernel/bpf/btf.c      | 34 +++++++++++++++++++++++++++++++++-
>  kernel/bpf/verifier.c |  6 +-----
>  3 files changed, 36 insertions(+), 7 deletions(-)
>

[...]

> +       /* ok to race the search. The result is the same */
> +       ret = __btf_resolve_helper_id(log, fn->func, arg);
> +       if (!ret) {
> +               bpf_log(log, "BTF resolution bug\n");
> +               return -EFAULT;
> +       }
> +       spin_lock(&btf_resolve_lock);
> +       if (*btf_id) {
> +               ret = *btf_id;
> +               goto out;
> +       }
> +       *btf_id = ret;
> +out:
> +       spin_unlock(&btf_resolve_lock);

Is this race actually a problem? Does it cause any issues? Even if
several verifier instances do the resolution in parallel, they all
have to arrive at the same btf_id, so setting it unconditionally
multiple times without locking should be fine, no? Maybe a
WRITE_ONCE, but I'm not sure why go all the way to a spinlock.
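
To make it concrete, here is a rough sketch of the lockless variant I
have in mind (completely untested; the function name, signature, the
btf_id pointer argument, and the early cached-value check are my
assumptions about the parts of the patch elided above, not the actual
code):

/* Sketch only: show that a plain READ_ONCE/WRITE_ONCE pair is enough,
 * since every racing resolver computes the same id for the same arg.
 */
static s32 btf_resolve_helper_id_sketch(struct bpf_verifier_log *log,
					const struct bpf_func_proto *fn,
					int arg, u32 *btf_id)
{
	u32 cached;
	s32 ret;

	/* pairs with the WRITE_ONCE() below; non-zero means some other
	 * verifier instance already resolved this helper argument
	 */
	cached = READ_ONCE(*btf_id);
	if (cached)
		return cached;

	/* ok to race the search -- every caller resolves to the same id */
	ret = __btf_resolve_helper_id(log, fn->func, arg);
	if (!ret) {
		bpf_log(log, "BTF resolution bug\n");
		return -EFAULT;
	}

	/* racing writers all store the same value, so an unserialized but
	 * idempotent WRITE_ONCE() is enough; no spinlock needed
	 */
	WRITE_ONCE(*btf_id, ret);
	return ret;
}

If two verifiers race past the READ_ONCE(), both call
__btf_resolve_helper_id() and store the same value, so the duplicate
work is wasted but harmless.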


> +       return ret;
> +}
> +
>  static int __get_type_size(struct btf *btf, u32 btf_id,
>                            const struct btf_type **bad_type)
>  {

[...]
