Message-ID: <CAADnVQ+Ed86oOZPA1rOn_COKPpH1917Q6QUtETkciC8L8+u22A@mail.gmail.com>
Date:   Thu, 11 Jun 2020 15:29:09 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     "David S. Miller" <davem@...emloft.net>,
        Paul McKenney <paulmckrcu@...il.com>
Cc:     Daniel Borkmann <daniel@...earbox.net>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        Network Development <netdev@...r.kernel.org>,
        bpf <bpf@...r.kernel.org>, Kernel Team <kernel-team@...com>
Subject: Re: [PATCH RFC v3 bpf-next 1/4] bpf: Introduce sleepable BPF programs

On Thu, Jun 11, 2020 at 3:23 PM Alexei Starovoitov
<alexei.starovoitov@...il.com> wrote:
>
>  /* dummy _ops. The verifier will operate on target program's ops. */
>  const struct bpf_verifier_ops bpf_extension_verifier_ops = {
> @@ -205,14 +206,12 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
>             tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs)
>                 flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
>
> -       /* Though the second half of trampoline page is unused a task could be
> -        * preempted in the middle of the first half of trampoline and two
> -        * updates to trampoline would change the code from underneath the
> -        * preempted task. Hence wait for tasks to voluntarily schedule or go
> -        * to userspace.
> +       /* the same trampoline can hold both sleepable and non-sleepable progs.
> +        * synchronize_rcu_tasks_trace() is needed to make sure all sleepable
> +        * programs finish executing. It also ensures that the rest of the
> +        * generated trampoline assembly finishes before updating the trampoline.
>          */
> -
> -       synchronize_rcu_tasks();
> +       synchronize_rcu_tasks_trace();

Hi Paul,

I've been looking at the rcu_tasks_trace implementation and I think the
above change is correct.
Could you please double-check my understanding?
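
To spell out the pairing I'm relying on, the shape is roughly the
following (just a sketch, not actual kernel code; BPF_PROG_RUN() stands
in for whatever arch_prepare_bpf_trampoline() emits around the prog):

  /* reader side: generated trampoline code around a sleepable prog */
  rcu_read_lock_trace();
  ret = BPF_PROG_RUN(prog, ctx);  /* may sleep, e.g. in copy_from_user() */
  rcu_read_unlock_trace();

  /* updater side: bpf_trampoline_update() */
  synchronize_rcu_tasks_trace();  /* waits for all sections above,
                                   * including readers that slept */
  /* only then rewrite the trampoline text */

Since rcu_read_lock_trace() sections are allowed to block,
synchronize_rcu_tasks_trace() can wait out a sleeping prog, which
neither plain RCU (readers must not sleep) nor rcu_tasks (a voluntary
sleep counts as a quiescent state) can do.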

Also see benchmarking numbers in the cover letter :)

>         err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2,
>                                           &tr->func.model, flags, tprogs,
> @@ -344,7 +343,14 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
>         if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT])))
>                 goto out;
>         bpf_image_ksym_del(&tr->ksym);
> -       /* wait for tasks to get out of trampoline before freeing it */
> +       /* This code will be executed only after all bpf progs (both sleepable
> +        * and non-sleepable) have gone through
> +        * bpf_prog_put()->call_rcu[_tasks_trace]()->bpf_prog_free_deferred().
> +        * Hence no need for another synchronize_rcu_tasks_trace() here,
> +        * but synchronize_rcu_tasks() is still needed, since the trampoline
> +        * may not have had any sleepable programs and we still need to wait
> +        * for tasks to get out of trampoline code before freeing it.
> +        */
>         synchronize_rcu_tasks();
>         bpf_jit_free_exec(tr->image);
>         hlist_del(&tr->hlist);
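
(For reference, the prog-free path mentioned in the comment above looks
roughly like this; a sketch, modulo the exact callback name:

  /* in the prog-put path, once the refcount drops to zero */
  if (prog->aux->sleepable)
          call_rcu_tasks_trace(&prog->aux->rcu, __bpf_prog_put_rcu);
  else
          call_rcu(&prog->aux->rcu, __bpf_prog_put_rcu);

and the rcu callback eventually lands in bpf_prog_free_deferred().)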
> @@ -394,6 +400,21 @@ void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start)
>         rcu_read_unlock();
>  }
>
> +/* When rcu_read_lock_trace is held it means that some sleepable bpf program
> + * is running. Such programs can use bpf arrays and preallocated hash maps.
> + * These map types wait for the programs to complete via
> + * synchronize_rcu_tasks_trace();
> + */
> +void notrace __bpf_prog_enter_sleepable(void)
> +{
> +       rcu_read_lock_trace();
> +}
> +
> +void notrace __bpf_prog_exit_sleepable(void)
> +{
> +       rcu_read_unlock_trace();
> +}
> +
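
To make the map lifetime part concrete, the ordering I'm relying on is
(a sketch of the call chain, not new code in this patch):

  bpf_prog_put(prog)
    -> call_rcu_tasks_trace(...)        /* tasks-trace grace period: all
                                         * sleepable sections have finished */
      -> bpf_prog_free_deferred()
        -> bpf_free_used_maps()         /* map refs are dropped only here */
          -> bpf_map_put() -> map_free  /* so arrays and prealloc htabs are
                                         * freed with no sleepable users left */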
