Date:   Mon, 9 Jul 2018 15:34:41 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Daniel Colascione <dancol@...gle.com>
Cc:     Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Joel Fernandes <joelaf@...gle.com>,
        Alexei Starovoitov <ast@...com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Tim Murray <timmurray@...gle.com>,
        Daniel Borkmann <daniel@...earbox.net>,
        netdev <netdev@...r.kernel.org>, Chenbo Feng <fengc@...gle.com>
Subject: Re: [RFC] Add BPF_SYNCHRONIZE bpf(2) command

On Mon, Jul 09, 2018 at 03:21:43PM -0700, Daniel Colascione wrote:
> On Mon, Jul 9, 2018 at 3:10 PM, Alexei Starovoitov
> <alexei.starovoitov@...il.com> wrote:
> > On Mon, Jul 09, 2018 at 02:36:37PM -0700, Daniel Colascione wrote:
> >> On Mon, Jul 9, 2018 at 2:09 PM, Alexei Starovoitov
> >> <alexei.starovoitov@...il.com> wrote:
> >> > On Sun, Jul 08, 2018 at 04:54:38PM -0400, Mathieu Desnoyers wrote:
> >> >> ----- On Jul 7, 2018, at 4:33 PM, Joel Fernandes joelaf@...gle.com wrote:
> >> >>
> >> >> > On Fri, Jul 06, 2018 at 07:54:28PM -0700, Alexei Starovoitov wrote:
> >> >> >> On Fri, Jul 06, 2018 at 06:56:16PM -0700, Daniel Colascione wrote:
> >> >> >> > BPF_SYNCHRONIZE waits for any BPF programs active at the time of
> >> >> >> > BPF_SYNCHRONIZE to complete, allowing userspace to ensure atomicity of
> >> >> >> > RCU data structure operations with respect to active programs. For
> >> >> >> > example, userspace can update a map->map entry to point to a new map,
> >> >> >> > use BPF_SYNCHRONIZE to wait for any BPF programs using the old map to
> >> >> >> > complete, and then drain the old map without fear that BPF programs
> >> >> >> > may still be updating it.
> >> >> >> >
> >> >> >> > Signed-off-by: Daniel Colascione <dancol@...gle.com>
> >> >> >> > ---
> >> >> >> >  include/uapi/linux/bpf.h |  1 +
> >> >> >> >  kernel/bpf/syscall.c     | 14 ++++++++++++++
> >> >> >> >  2 files changed, 15 insertions(+)
> >> >> >> >
> >> >> >> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> >> >> >> > index b7db3261c62d..4365c50e8055 100644
> >> >> >> > --- a/include/uapi/linux/bpf.h
> >> >> >> > +++ b/include/uapi/linux/bpf.h
> >> >> >> > @@ -98,6 +98,7 @@ enum bpf_cmd {
> >> >> >> >          BPF_BTF_LOAD,
> >> >> >> >          BPF_BTF_GET_FD_BY_ID,
> >> >> >> >          BPF_TASK_FD_QUERY,
> >> >> >> > +        BPF_SYNCHRONIZE,
> >> >> >> >  };
> >> >> >> >
> >> >> >> >  enum bpf_map_type {
> >> >> >> > diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> >> >> >> > index d10ecd78105f..60ec7811846e 100644
> >> >> >> > --- a/kernel/bpf/syscall.c
> >> >> >> > +++ b/kernel/bpf/syscall.c
> >> >> >> > @@ -2272,6 +2272,20 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *,
> >> >> >> > uattr, unsigned int, siz
> >> >> >> >          if (sysctl_unprivileged_bpf_disabled && !capable(CAP_SYS_ADMIN))
> >> >> >> >                  return -EPERM;
> >> >> >> >
> >> >> >> > +        if (cmd == BPF_SYNCHRONIZE) {
> >> >> >> > +                if (uattr != NULL || size != 0)
> >> >> >> > +                        return -EINVAL;
> >> >> >> > +                err = security_bpf(cmd, NULL, 0);
> >> >> >> > +                if (err < 0)
> >> >> >> > +                        return err;
> >> >> >> > +                /* BPF programs are run with preempt disabled, so
> >> >> >> > +                 * synchronize_sched is sufficient even with
> >> >> >> > +                 * RCU_PREEMPT.
> >> >> >> > +                 */
> >> >> >> > +                synchronize_sched();
> >> >> >> > +                return 0;
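
For illustration, a minimal userspace sketch of the sequence the commit
message above describes might look like the following, assuming the proposed
BPF_SYNCHRONIZE command is merged as posted; the outer/inner map fds and the
slot key are hypothetical placeholders:

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* BPF_SYNCHRONIZE takes no attributes: uattr must be NULL, size 0. */
static int bpf_synchronize(void)
{
	return syscall(__NR_bpf, BPF_SYNCHRONIZE, NULL, 0);
}

/*
 * Point slot 0 of the outer map-in-map at new_fd, wait for every BPF
 * program that may still hold the old inner map, and only then drain
 * the old map from userspace.
 */
static int swap_inner_map(int outer_fd, int new_fd)
{
	uint32_t slot = 0;
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.map_fd = outer_fd;
	attr.key    = (uintptr_t)&slot;
	attr.value  = (uintptr_t)&new_fd;
	if (syscall(__NR_bpf, BPF_MAP_UPDATE_ELEM, &attr, sizeof(attr)))
		return -errno;

	if (bpf_synchronize())
		return -errno;

	/* Programs that saw the old inner map have finished; drain it
	 * here with BPF_MAP_GET_NEXT_KEY / BPF_MAP_LOOKUP_ELEM /
	 * BPF_MAP_DELETE_ELEM as usual.
	 */
	return 0;
}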
> >> >> >>
> >> >> >> I don't think it's necessary. sys_membarrier() can do this already
> >> >> >> and some folks use it exactly for this use case.
> >> >> >
> >> >> > Alexei, the use of sys_membarrier for this purpose seems kind of weird to me
> >> >> > though. Nowhere does the manpage say membarrier should be implemented this
> >> >> > way, so what happens if the implementation changes?
> >> >> >
> >> >> > Further, the membarrier manpage says that a memory barrier should be matched
> >> >> > with a matching barrier. In this use case there is no matching barrier, which
> >> >> > makes it weirder.
> >> >> >
> >> >> > Lastly, sys_membarrier seems like it will not work on nohz-full systems, so it's
> >> >> > a bit fragile to depend on it for this?
> >> >> >
> >> >> >        case MEMBARRIER_CMD_GLOBAL:
> >> >> >                /* MEMBARRIER_CMD_GLOBAL is not compatible with nohz_full. */
> >> >> >                if (tick_nohz_full_enabled())
> >> >> >                        return -EINVAL;
> >> >> >                if (num_online_cpus() > 1)
> >> >> >                        synchronize_sched();
> >> >> >                return 0;
> >> >> >
> >> >> >
> >> >> > Adding Mathieu as well, who I believe is the author/maintainer of membarrier.
> >> >>
> >> >> See commit 907565337
> >> >> "Fix: Disable sys_membarrier when nohz_full is enabled"
> >> >>
> >> >> "Userspace applications should be allowed to expect the membarrier system
> >> >> call with MEMBARRIER_CMD_SHARED command to issue memory barriers on
> >> >> nohz_full CPUs, but synchronize_sched() does not take those into
> >> >> account."
> >> >>
> >> >> So AFAIU you'd want to re-use membarrier to issue synchronize_sched, and you
> >> >> only care about kernel preempt off critical sections.
> >> >>
> >> >> Clearly bpf code does not run in user-space, so it would "work".
> >> >>
> >> >> But the guarantees provided by membarrier are not to synchronize against
> >> >> preempt off per se. It's just that the current implementation happens to
> >> >> do that. The point of membarrier is to turn user-space memory barriers
> >> >> into compiler barriers.
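
For context, the pairing membarrier(2) is designed around looks roughly like
the sketch below (illustrative only, not the bpf use case): the frequently
executed side uses a plain compiler barrier, and the rarely executed side
upgrades it by calling membarrier:

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/membarrier.h>

/* Fast path: a compiler barrier only; the slow path's membarrier()
 * promotes it to a full memory barrier on every running thread.
 */
static inline void fast_path_barrier(void)
{
	__asm__ __volatile__("" ::: "memory");
}

/* Slow path: issue a memory barrier on all CPUs running a thread
 * (returns -EINVAL with nohz_full, as in the code quoted above).
 */
static inline int slow_path_barrier(void)
{
	return syscall(__NR_membarrier, MEMBARRIER_CMD_GLOBAL, 0);
}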
> >> >>
> >> >> If what you need is to wait for an RCU grace period for whatever RCU flavor
> >> >> ebpf is using, I would be against using membarrier for this. I would rather
> >> >> recommend adding a dedicated BPF_SYNCHRONIZE so you won't leak
> >> >> implementation details to user-space, *and* you can eventually change your
> >> >> RCU implementation to e.g. SRCU in the future if needed.
> >> >
> >> > The point about future changes to underlying bpf mechanisms is valid.
> >> > There is work already on the way to reduce the scope of preempt_off+rcu_lock
> >> > that currently lasts the whole prog. We will have new prog types that won't
> >> > have such wrappers and will do rcu_lock/unlock and preempt on/off only
> >> > when necessary.
> >> > So something like BPF_SYNCHRONIZE will break soon, since the kernel cannot have
> >> > guarantees on when programs finish. Calling this command BPF_SYNCHRONIZE_PROG
> >> > also won't make sense for the same reason.
> >> > What we can do instead is define a synchronization barrier for
> >> > programs accessing maps. Maybe call it something like:
> >> > BPF_SYNC_MAP_ACCESS ?
> >>
> >> I'm not sure what you're proposing. In the case the commit message
> >> describes, a user-space program that wants to "drain" a map needs to
> >> be confident that the map won't change under it, even across multiple
> >> bpf system calls on that map. One way of doing that is to ensure that
> >> nothing that could possibly hold a reference to that map is still
> >> running. Are you proposing some kind of refcount-draining approach?
> >> Simple locking won't work, since BPF programs can't block, and I don't
> >> see right now how a simple barrier would help.
> >
> > I'm proposing a few changes to your patch:
> > s/BPF_SYNCHRONIZE/BPF_SYNC_MAP_ACCESS/
> > and s/synchronize_sched/synchronize_rcu/
> > with detailed comment in uapi/bpf.h that has an example why folks
> > would want to use this new cmd.
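
Spelled out, those two renames applied to the hunk quoted above would read
roughly like this (sketch only; the final wording of the uapi comment is up
to the patch author):

	if (cmd == BPF_SYNC_MAP_ACCESS) {
		if (uattr != NULL || size != 0)
			return -EINVAL;
		err = security_bpf(cmd, NULL, 0);
		if (err < 0)
			return err;
		/* Wait for one RCU grace period so that any program
		 * that was accessing a map when this command was
		 * issued has finished with it.
		 */
		synchronize_rcu();
		return 0;
	}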
> 
> Thanks for clarifying.
> 
> > I think the bpf maps will be rcu protected for the foreseeable future
> > even when rcu_read_lock/unlock is done by the programs instead of
> > kernel wrappers.
> 
> Can we guarantee that we always obtain a map reference and dispose of
> that reference inside the same critical section? 

yep. the verifier will guarantee that.

> If so, can BPF
> programs then disable preemption for as long as they'd like?

you mean after the program finishes? no. only while running.
The verifier will match things like lookup/release, lock/unlock, preempt on/off
and will make sure there is no dangling preempt disable after the program returns.
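
To make the whole picture concrete, the BPF-program side of the map-in-map
pattern under discussion might look like the sketch below (hypothetical names,
assuming the usual bpf_helpers.h from the kernel tree; the userspace wiring
that creates the inner map and stores its fd in outer_map before load is
omitted). Both lookups and the update happen within a single program
invocation, so the inner-map reference never outlives the grace period the
new command waits for:

#include <linux/bpf.h>
#include "bpf_helpers.h"	/* bpf_map_lookup_elem(), SEC() */

struct bpf_map_def SEC("maps") outer_map = {
	.type        = BPF_MAP_TYPE_ARRAY_OF_MAPS,
	.key_size    = sizeof(__u32),
	.value_size  = sizeof(__u32),	/* inner map fd, set at load time */
	.max_entries = 1,
};

SEC("socket")
int count_packets(struct __sk_buff *skb)
{
	__u32 zero = 0;
	void *inner;
	__u64 *cnt;

	/* The inner-map lookup and the update through it both stay
	 * inside this one invocation, i.e. inside one RCU read-side /
	 * preempt-off section as far as the verifier is concerned.
	 */
	inner = bpf_map_lookup_elem(&outer_map, &zero);
	if (!inner)
		return 0;

	cnt = bpf_map_lookup_elem(inner, &zero);
	if (cnt)
		__sync_fetch_and_add(cnt, 1);

	return 0;
}

char _license[] SEC("license") = "GPL";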
