Message-Id: <20180710164212.GY3593@linux.vnet.ibm.com>
Date: Tue, 10 Jul 2018 09:42:12 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Joel Fernandes <joel@...lfernandes.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Joel Fernandes <joelaf@...gle.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Daniel Colascione <dancol@...gle.com>,
Alexei Starovoitov <ast@...com>,
linux-kernel <linux-kernel@...r.kernel.org>,
Tim Murray <timmurray@...gle.com>,
Daniel Borkmann <daniel@...earbox.net>,
netdev <netdev@...r.kernel.org>, fengc@...gle.com
Subject: Re: [RFC] Add BPF_SYNCHRONIZE bpf(2) command
On Mon, Jul 09, 2018 at 10:13:47PM -0700, Joel Fernandes wrote:
> On Sun, Jul 08, 2018 at 04:54:38PM -0400, Mathieu Desnoyers wrote:
> > ----- On Jul 7, 2018, at 4:33 PM, Joel Fernandes joelaf@...gle.com wrote:
> >
> > > On Fri, Jul 06, 2018 at 07:54:28PM -0700, Alexei Starovoitov wrote:
> > >> On Fri, Jul 06, 2018 at 06:56:16PM -0700, Daniel Colascione wrote:
> > >> > BPF_SYNCHRONIZE waits for any BPF programs active at the time of
> > >> > BPF_SYNCHRONIZE to complete, allowing userspace to ensure atomicity of
> > >> > RCU data structure operations with respect to active programs. For
> > >> > example, userspace can update a map->map entry to point to a new map,
> > >> > use BPF_SYNCHRONIZE to wait for any BPF programs using the old map to
> > >> > complete, and then drain the old map without fear that BPF programs
> > >> > may still be updating it.
> > >> >
> > >> > Signed-off-by: Daniel Colascione <dancol@...gle.com>
> > >> > ---
> > >> > include/uapi/linux/bpf.h | 1 +
> > >> > kernel/bpf/syscall.c | 14 ++++++++++++++
> > >> > 2 files changed, 15 insertions(+)
> > >> >
> > >> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > >> > index b7db3261c62d..4365c50e8055 100644
> > >> > --- a/include/uapi/linux/bpf.h
> > >> > +++ b/include/uapi/linux/bpf.h
> > >> > @@ -98,6 +98,7 @@ enum bpf_cmd {
> > >> >  	BPF_BTF_LOAD,
> > >> >  	BPF_BTF_GET_FD_BY_ID,
> > >> >  	BPF_TASK_FD_QUERY,
> > >> > +	BPF_SYNCHRONIZE,
> > >> >  };
> > >> >
> > >> >  enum bpf_map_type {
> > >> > diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> > >> > index d10ecd78105f..60ec7811846e 100644
> > >> > --- a/kernel/bpf/syscall.c
> > >> > +++ b/kernel/bpf/syscall.c
> > >> > @@ -2272,6 +2272,20 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *,
> > >> > uattr, unsigned int, siz
> > >> >  	if (sysctl_unprivileged_bpf_disabled && !capable(CAP_SYS_ADMIN))
> > >> >  		return -EPERM;
> > >> >
> > >> > +	if (cmd == BPF_SYNCHRONIZE) {
> > >> > +		if (uattr != NULL || size != 0)
> > >> > +			return -EINVAL;
> > >> > +		err = security_bpf(cmd, NULL, 0);
> > >> > +		if (err < 0)
> > >> > +			return err;
> > >> > +		/* BPF programs are run with preempt disabled, so
> > >> > +		 * synchronize_sched is sufficient even with
> > >> > +		 * RCU_PREEMPT.
> > >> > +		 */
> > >> > +		synchronize_sched();
> > >> > +		return 0;
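To make the intended usage above concrete, the userspace side of the
changelog's map-in-map swap-and-drain would look roughly like the sketch
below. The fds, the outer-map slot, and the error handling are made up for
illustration, and BPF_SYNCHRONIZE here is only the command this patch
proposes, not existing UAPI.

	#include <linux/bpf.h>
	#include <string.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static long sys_bpf(int cmd, union bpf_attr *attr, unsigned int size)
	{
		return syscall(__NR_bpf, cmd, attr, size);
	}

	static int swap_inner_map(int outer_fd, __u32 slot, int new_inner_fd)
	{
		union bpf_attr attr;

		/* 1. Repoint the map-in-map slot at the new inner map. */
		memset(&attr, 0, sizeof(attr));
		attr.map_fd = outer_fd;
		attr.key    = (__u64)(unsigned long)&slot;
		attr.value  = (__u64)(unsigned long)&new_inner_fd;
		if (sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, sizeof(attr)))
			return -1;

		/* 2. Wait for any program that might still be using the
		 *    old inner map. */
		if (sys_bpf(BPF_SYNCHRONIZE, NULL, 0))
			return -1;

		/* 3. The old inner map is now quiescent and can be drained
		 *    with BPF_MAP_GET_NEXT_KEY/BPF_MAP_DELETE_ELEM without
		 *    racing against running programs. */
		return 0;
	}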
> > >>
> > >> I don't think it's necessary. sys_membarrier() can do this already
> > >> and some folks use it exactly for this use case.
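For reference, the userspace side of what Alexei describes would be
something like the sketch below. It is illustration only, and it relies on
MEMBARRIER_CMD_GLOBAL being implemented via synchronize_sched(), which is
exactly the implementation detail questioned below.

	#include <linux/membarrier.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	/* After repointing the map-in-map slot, wait until no BPF program
	 * can still be using the old inner map, then drain it. */
	static int wait_for_old_map_users(void)
	{
		return syscall(__NR_membarrier, MEMBARRIER_CMD_GLOBAL, 0);
	}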
> > >
> > > Alexei, the use of sys_membarrier for this purpose seems kind of weird to me
> > > though. Nowhere does the manpage say membarrier should be implemented this
> > > way, so what happens if the implementation changes?
> > >
> > > Further, the membarrier manpage says that a memory barrier should be paired
> > > with a matching barrier. In this use case there is no matching barrier, which
> > > makes it weirder.
> > >
> > > Lastly, sys_membarrier seemingly will not work on nohz-full systems, so it's a
> > > bit fragile to depend on it for this?
> > >
> > > 	case MEMBARRIER_CMD_GLOBAL:
> > > 		/* MEMBARRIER_CMD_GLOBAL is not compatible with nohz_full. */
> > > 		if (tick_nohz_full_enabled())
> > > 			return -EINVAL;
> > > 		if (num_online_cpus() > 1)
> > > 			synchronize_sched();
> > > 		return 0;
> > >
> > >
> > > Adding Mathieu as well who I believe is author/maintainer of membarrier.
> >
> > See commit 907565337
> > "Fix: Disable sys_membarrier when nohz_full is enabled"
> >
> > "Userspace applications should be allowed to expect the membarrier system
> > call with MEMBARRIER_CMD_SHARED command to issue memory barriers on
> > nohz_full CPUs, but synchronize_sched() does not take those into
> > account."
> >
> > So AFAIU you'd want to re-use membarrier to issue synchronize_sched, and you
> > only care about kernel preempt-off critical sections.
>
> Mathieu, thanks a lot for your reply. I understand what you said and agree
> with you. Slightly OT, but I tried to go back to first principles and
> understand how membarrier() uses synchronize_sched() for the "slow path", and
> it didn't make immediate sense to me. Let me clarify my dilemma.
>
> My understanding is that membarrier's MEMBARRIER_CMD_GLOBAL employs
> synchronize_sched to make sure no other CPU is still executing in a
> section of user code that happens to be accessing memory that was written to
> before the membarrier call was made. To do this, the system call uses
> synchronize_sched to try to guarantee that all user-mode execution that
> started before the membarrier call has completed by the time the membarrier
> call returns. This guarantees that, without using a real memory barrier on the
> "fast path", things work just fine and everyone wins.
>
> But, going through the RCU code, I see that an "RCU-sched quiescent state" on a
> CPU may be reached when the CPU receives a timer tick while executing in user
> mode:
>
> void rcu_check_callbacks(int user)
> {
> 	trace_rcu_utilization(TPS("Start scheduler-tick"));
> 	increment_cpu_stall_ticks();
> 	if (user || rcu_is_cpu_rrupt_from_idle()) {
> 		[...]
> 		rcu_sched_qs();
> 		rcu_bh_qs();
>
> The problem I see is that the CPU could be executing usermode code at the time
> of the RCU sched-QS. This IMO is enough reason for synchronize_sched() to
> return, because the CPU in question has just reported a QS (assuming all other
> CPUs also do so if they need to).
This scenario will have inserted the needed smp_mb() into the userspace
instruction execution stream, as is required by the sys_membarrier
use cases.
> Then I am wondering how the membarrier call even works: the tick could
> very well have interrupted the CPU while it was executing usermode code in
> the middle of a set of instructions performing memory accesses. Reporting a
> quiescent state at such an inopportune time would cause the membarrier call
> to return prematurely, no? Sorry if I missed something.
One way to think of sys_membarrier() is as something that promotes a
barrier() to an smp_mb(). This barrier then separates the target CPU's
accesses that the caller saw before the sys_membarrier() from that same
CPU's accesses that the caller will see after the sys_membarrier().
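Concretely, that promotion is typically used along the lines of the sketch
below (made-up variables, loosely following the usual membarrier usage
example): the rarely-executed side pays for the sys_membarrier() call so
that the frequently-executed side can make do with a compiler barrier where
it would otherwise need smp_mb().

	#include <linux/membarrier.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static volatile int x, y;	/* made-up shared variables */

	/* Hot path: only a compiler barrier between the two accesses. */
	static void fast_path(int *read_y)
	{
		x = 1;
		asm volatile("" : : : "memory");	/* barrier() */
		*read_y = y;
	}

	/* Cold path: promotes the hot path's barrier() to a full memory
	 * barrier on all CPUs running threads at the time of the call. */
	static void slow_path(int *read_x)
	{
		y = 1;
		syscall(__NR_membarrier, MEMBARRIER_CMD_GLOBAL, 0);
		*read_x = x;
	}

With that promotion in place, fast_path() and slow_path() cannot both miss
the other's store, which is the outcome a pair of smp_mb() calls would
normally be used to forbid.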
> The other question I have is about the whole "nohz-full doesn't work" thing,
> which I didn't fully understand. RCU is already tracking the state of nohz-full
> CPUs, because the RCU dynticks code (in kernel/rcu/tree.c) monitors
> transitions to and from usermode even if the timer tick is turned off. So why
> would it not work?
In the nohz_full case, there is no need for sys_membarrier()'s call to
synchronize_sched() to interact directly with the nohz_full CPU. It
can instead look at the target CPU's dyntick-idle state, and that state
would potentially have been set in the dim distant past, thus having
no effect on the target CPU's current execution.
Thanx, Paul