Message-ID: <20180709221954.zo4626ggufija4g2@ast-mbp.dhcp.thefacebook.com>
Date: Mon, 9 Jul 2018 15:19:55 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Joel Fernandes <joelaf@...gle.com>,
Daniel Colascione <dancol@...gle.com>,
Alexei Starovoitov <ast@...com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Tim Murray <timmurray@...gle.com>,
Daniel Borkmann <daniel@...earbox.net>,
netdev <netdev@...r.kernel.org>, fengc <fengc@...gle.com>
Subject: Re: [RFC] Add BPF_SYNCHRONIZE bpf(2) command
On Mon, Jul 09, 2018 at 03:19:03PM -0700, Paul E. McKenney wrote:
> On Mon, Jul 09, 2018 at 05:35:34PM -0400, Mathieu Desnoyers wrote:
> >
> >
> > ----- On Jul 9, 2018, at 5:09 PM, Alexei Starovoitov alexei.starovoitov@...il.com wrote:
> >
> > > On Sun, Jul 08, 2018 at 04:54:38PM -0400, Mathieu Desnoyers wrote:
> > >> ----- On Jul 7, 2018, at 4:33 PM, Joel Fernandes joelaf@...gle.com wrote:
> > >>
> > >> > On Fri, Jul 06, 2018 at 07:54:28PM -0700, Alexei Starovoitov wrote:
> > >> >> On Fri, Jul 06, 2018 at 06:56:16PM -0700, Daniel Colascione wrote:
> > >> >> > BPF_SYNCHRONIZE waits for any BPF programs active at the time of
> > >> >> > BPF_SYNCHRONIZE to complete, allowing userspace to ensure atomicity of
> > >> >> > RCU data structure operations with respect to active programs. For
> > >> >> > example, userspace can update a map->map entry to point to a new map,
> > >> >> > use BPF_SYNCHRONIZE to wait for any BPF programs using the old map to
> > >> >> > complete, and then drain the old map without fear that BPF programs
> > >> >> > may still be updating it.
> > >> >> >
> > >> >> > Signed-off-by: Daniel Colascione <dancol@...gle.com>
> > >> >> > ---
> > >> >> > include/uapi/linux/bpf.h | 1 +
> > >> >> > kernel/bpf/syscall.c | 14 ++++++++++++++
> > >> >> > 2 files changed, 15 insertions(+)
> > >> >> >
> > >> >> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > >> >> > index b7db3261c62d..4365c50e8055 100644
> > >> >> > --- a/include/uapi/linux/bpf.h
> > >> >> > +++ b/include/uapi/linux/bpf.h
> > >> >> > @@ -98,6 +98,7 @@ enum bpf_cmd {
> > >> >> > BPF_BTF_LOAD,
> > >> >> > BPF_BTF_GET_FD_BY_ID,
> > >> >> > BPF_TASK_FD_QUERY,
> > >> >> > + BPF_SYNCHRONIZE,
> > >> >> > };
> > >> >> >
> > >> >> > enum bpf_map_type {
> > >> >> > diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> > >> >> > index d10ecd78105f..60ec7811846e 100644
> > >> >> > --- a/kernel/bpf/syscall.c
> > >> >> > +++ b/kernel/bpf/syscall.c
> > >> >> > @@ -2272,6 +2272,20 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
> > >> >> > if (sysctl_unprivileged_bpf_disabled && !capable(CAP_SYS_ADMIN))
> > >> >> > return -EPERM;
> > >> >> >
> > >> >> > + if (cmd == BPF_SYNCHRONIZE) {
> > >> >> > + if (uattr != NULL || size != 0)
> > >> >> > + return -EINVAL;
> > >> >> > + err = security_bpf(cmd, NULL, 0);
> > >> >> > + if (err < 0)
> > >> >> > + return err;
> > >> >> > + /* BPF programs are run with preempt disabled, so
> > >> >> > + * synchronize_sched is sufficient even with
> > >> >> > + * RCU_PREEMPT.
> > >> >> > + */
> > >> >> > + synchronize_sched();
> > >> >> > + return 0;
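
(For illustration only: a minimal userspace sketch of the swap-and-drain pattern the
commit message above describes, assuming a hash-of-maps outer map with u32 keys and an
inner map with u64 keys. The BPF_SYNCHRONIZE value and the helper names are assumptions
for the sketch, not part of the patch.)

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/bpf.h>

    /* Proposed command from the patch above; the value is illustrative only. */
    #ifndef BPF_SYNCHRONIZE
    #define BPF_SYNCHRONIZE (BPF_TASK_FD_QUERY + 1)
    #endif

    static long sys_bpf(int cmd, union bpf_attr *attr, unsigned int size)
    {
            return syscall(__NR_bpf, cmd, attr, size);
    }

    static uint64_t ptr_to_u64(const void *p)
    {
            return (uint64_t)(unsigned long)p;
    }

    /*
     * Point outer_fd[key] at new_inner_fd, wait for programs that may
     * still be using the old inner map, then drain old_inner_fd.
     */
    static int swap_and_drain(int outer_fd, uint32_t key,
                              int new_inner_fd, int old_inner_fd)
    {
            union bpf_attr attr;
            uint32_t inner_fd = new_inner_fd; /* map-in-map value is the inner map fd */
            uint64_t next_key;

            memset(&attr, 0, sizeof(attr));
            attr.map_fd = outer_fd;
            attr.key = ptr_to_u64(&key);
            attr.value = ptr_to_u64(&inner_fd);
            if (sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, sizeof(attr)))
                    return -errno;

            /* Wait for programs that still see the old inner map to finish. */
            if (sys_bpf(BPF_SYNCHRONIZE, NULL, 0))
                    return -errno;

            /* The old map can now be drained without racing with programs. */
            for (;;) {
                    memset(&attr, 0, sizeof(attr));
                    attr.map_fd = old_inner_fd;
                    attr.key = 0;                       /* NULL key: first element */
                    attr.next_key = ptr_to_u64(&next_key);
                    if (sys_bpf(BPF_MAP_GET_NEXT_KEY, &attr, sizeof(attr)))
                            break;                      /* map is empty: done */

                    memset(&attr, 0, sizeof(attr));
                    attr.map_fd = old_inner_fd;
                    attr.key = ptr_to_u64(&next_key);
                    if (sys_bpf(BPF_MAP_DELETE_ELEM, &attr, sizeof(attr)))
                            break;
            }
            return 0;
    }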
> > >> >>
> > >> >> I don't think it's necessary. sys_membarrier() can do this already
> > >> >> and some folks use it exactly for this use case.
> > >> >
> > >> > Alexei, the use of sys_membarrier for this purpose seems kind of weird to me
> > >> > though. Nowhere does the manpage say membarrier should be implemented this
> > >> > way, so what happens if the implementation changes?
> > >> >
> > >> > Further, the membarrier manpage says that a memory barrier should be paired
> > >> > with a matching barrier. In this use case there is no matching barrier, which
> > >> > makes it weirder.
> > >> >
> > >> > Lastly, sys_membarrier seems like it will not work on nohz_full systems, so
> > >> > it's a bit fragile to depend on it for this?
> > >> >
> > >> > case MEMBARRIER_CMD_GLOBAL:
> > >> > /* MEMBARRIER_CMD_GLOBAL is not compatible with nohz_full. */
> > >> > if (tick_nohz_full_enabled())
> > >> > return -EINVAL;
> > >> > if (num_online_cpus() > 1)
> > >> > synchronize_sched();
> > >> > return 0;
> > >> >
> > >> >
> > >> > Adding Mathieu as well, who I believe is the author/maintainer of membarrier.
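
(For context, a minimal sketch of how that existing path is used from userspace --
assuming a kernel >= 4.16, where the command is named MEMBARRIER_CMD_GLOBAL; as the code
quoted above shows, it fails with -EINVAL on nohz_full systems.)

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/membarrier.h>

    int main(void)
    {
            /*
             * MEMBARRIER_CMD_GLOBAL currently ends up in synchronize_sched(),
             * which is what makes it usable as a "wait for BPF programs"
             * barrier today -- an implementation detail, as this thread discusses.
             */
            if (syscall(__NR_membarrier, MEMBARRIER_CMD_GLOBAL, 0)) {
                    perror("membarrier");   /* e.g. EINVAL on nohz_full kernels */
                    return 1;
            }
            return 0;
    }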
> > >>
> > >> See commit 907565337
> > >> "Fix: Disable sys_membarrier when nohz_full is enabled"
> > >>
> > >> "Userspace applications should be allowed to expect the membarrier system
> > >> call with MEMBARRIER_CMD_SHARED command to issue memory barriers on
> > >> nohz_full CPUs, but synchronize_sched() does not take those into
> > >> account."
> > >>
> > >> So AFAIU you'd want to re-use membarrier to issue synchronize_sched, and you
> > >> only care about kernel preempt off critical sections.
> > >>
> > >> Clearly bpf code does not run in user-space, so it would "work".
> > >>
> > >> But the guarantees provided by membarrier are not to synchronize against
> > >> preempt off per se. It's just that the current implementation happens to
> > >> do that. The point of membarrier is to turn user-space memory barriers
> > >> into compiler barriers.
> > >>
> > >> If what you need is to wait for an RCU grace period for whatever RCU flavor
> > >> ebpf is using, I would advise against using membarrier for this. I would rather
> > >> recommend adding a dedicated BPF_SYNCHRONIZE so you won't leak
> > >> implementation details to user-space, *and* you can eventually change your
> > >> RCU implementation to e.g. SRCU in the future if needed.
> > >
> > > The point about future changes to underlying bpf mechanisms is valid.
> > > There is work already under way to reduce the scope of preempt_off+rcu_lock
> > > that currently lasts the whole prog. We will have new prog types that won't
> > > have such wrappers and will do rcu_lock/unlock and preempt on/off only
> > > when necessary.
> > > So something like BPF_SYNCHRONIZE will soon break, since the kernel cannot
> > > guarantee when programs finish. Calling this command BPF_SYNCHRONIZE_PROG
> > > also won't make sense, for the same reason.
> > > What we can do instead is define a synchronization barrier for
> > > programs accessing maps. Maybe call it something like
> > > BPF_SYNC_MAP_ACCESS?
> > > uapi/bpf.h would need an extensive comment on what this barrier does.
> > > The implementation should probably call synchronize_rcu() and not play games
> > > with synchronize_sched(), since that digs too far into implementation details.
> > > Also, should such a sys_bpf command be root-only?
> > > I'm not sure whether a DoS attack could be mounted by spamming synchronize_rcu(),
> > > or synchronize_sched() for that matter.
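
(To make that concrete: a rough sketch of what a BPF_SYNC_MAP_ACCESS branch in
kernel/bpf/syscall.c could look like, mirroring the patch quoted above but calling
synchronize_rcu(); the name and placement are assumptions, not a committed interface.)

    if (cmd == BPF_SYNC_MAP_ACCESS) {
            if (uattr != NULL || size != 0)
                    return -EINVAL;
            err = security_bpf(cmd, NULL, 0);
            if (err < 0)
                    return err;
            /*
             * Wait for all programs that might still be reading values
             * obtained from a map.  synchronize_rcu() expresses the
             * "map access" guarantee directly instead of leaning on the
             * current preempt-disabled implementation detail.
             */
            synchronize_rcu();
            return 0;
    }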
> >
> > Adding Paul E. McKenney in CC. He may want to share his thoughts on the matter.
>
> Let's see...
>
> Spamming synchronize_rcu() and synchronize_sched() should be a non-event,
> at least aside from the CPUs doing the spamming. The reason for this
> is that a given task can only fire off a single synchronize_sched() or
> synchronize_rcu() every few milliseconds, so you need a -lot- of tasks
> to have much effect, at which point the sheer number of tasks is much
> more a problem than the large number of outstanding synchronize_rcu()
> or synchronize_sched() invocations.
>
> I very strongly agree that usermode should have an operation that
> synchronizes with whatever eBPF uses, rather than something that forces
> a specific type of RCU grace period.
>
> Finally, in a few releases, synchronize_sched() will be retiring in favor
> of synchronize_rcu(), which will wait on preemption-disabled regions of
> code in addition to waiting on RCU read-side critical sections. Not a
> big deal, as I expect to enlist Coccinelle's aid in this.
>
> Did I manage to hit all the high points?
Thanks. It's ok for the new cmd to be unpriv then.