Message-Id: <20190124234232.GY4240@linux.ibm.com>
Date: Thu, 24 Jan 2019 15:42:32 -0800
From: "Paul E. McKenney" <paulmck@...ux.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Alexei Starovoitov <ast@...nel.org>, davem@...emloft.net,
daniel@...earbox.net, jakub.kicinski@...ronome.com,
netdev@...r.kernel.org, kernel-team@...com, mingo@...hat.com,
will.deacon@....com, jannh@...gle.com
Subject: Re: [PATCH v4 bpf-next 1/9] bpf: introduce bpf_spin_lock
On Thu, Jan 24, 2019 at 07:56:52PM +0100, Peter Zijlstra wrote:
> On Thu, Jan 24, 2019 at 07:01:09PM +0100, Peter Zijlstra wrote:
> >
> > Thanks for having kernel/locking people on Cc...
> >
> > On Wed, Jan 23, 2019 at 08:13:55PM -0800, Alexei Starovoitov wrote:
> >
> > > Implementation details:
> > > - on !SMP bpf_spin_lock() becomes nop
> >
> > Because no BPF program is preemptible? I don't see any assertions or
> > even a comment that says this code is non-preemptible.
> >
> > AFAICT some of the BPF_RUN_PROG things are under rcu_read_lock() only,
> > which is not sufficient.
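For illustration, a minimal sketch of what a UP no-op lock implicitly
relies on; bpf_spin_lock_sketch() is a hypothetical name, not the
patch's code, and the SMP branch assumes a qspinlock-capable arch:

#include <linux/preempt.h>
#include <linux/spinlock.h>

/*
 * Sketch only: a no-op lock on !SMP is safe only if the BPF program
 * cannot be preempted between lock and unlock.  The WARN_ON_ONCE()
 * documents and checks that assumption.
 */
static inline void bpf_spin_lock_sketch(struct qspinlock *lock)
{
#ifdef CONFIG_SMP
	queued_spin_lock(lock);
#else
	/* UP: no lock word needed, provided we cannot be preempted. */
	WARN_ON_ONCE(preemptible());
#endif
}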
> >
> > > - on architectures that don't support queued_spin_lock trivial lock is used.
> > > Note that arch_spin_lock cannot be used, since not all archs agree that
> > > zero == unlocked and sizeof(arch_spinlock_t) != sizeof(__u32).
> >
> > I really don't much like direct usage of qspinlock; esp. not as a
> > surprise.
Substituting the lightweight-reader SRCU as discussed earlier would allow
use of a more generic locking primitive, for example, one that allowed
blocking, at least in cases where the context allowed this.

git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
branch srcu-lr.2019.01.16a.

One advantage of a more generic locking primitive would be keeping BPF
programs independent of internal changes to spinlock primitives.
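For concreteness, a rough sketch using today's mainline SRCU primitives;
the lightweight-reader variants in the branch above would slot into the
same shape, and "bpf_srcu" is a made-up srcu_struct:

#include <linux/srcu.h>

DEFINE_STATIC_SRCU(bpf_srcu);

static void run_prog_under_srcu(void)
{
	int idx;

	idx = srcu_read_lock(&bpf_srcu);
	/* Invoke the BPF program here; blocking is permitted under
	 * SRCU, so a sleeping lock could be used by the program. */
	srcu_read_unlock(&bpf_srcu, idx);
}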
Thanx, Paul
> > Why does it matter if 0 means unlocked; that's what
> > __ARCH_SPIN_LOCK_UNLOCKED is for.
> >
> > I get the sizeof(__u32) thing, but why not key off of that?
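Keying off the size might look something like the following
illustrative-only sketch (helper names hypothetical), which also uses
__ARCH_SPIN_LOCK_UNLOCKED so that a nonzero "unlocked" encoding would
work:

#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/types.h>

/* Initialize the 4-byte BPF lock word in a freshly created element. */
static inline void bpf_lock_init_sketch(__u32 *lock_word)
{
	if (sizeof(arch_spinlock_t) <= sizeof(__u32)) {
		arch_spinlock_t unlocked = __ARCH_SPIN_LOCK_UNLOCKED;

		memcpy(lock_word, &unlocked, sizeof(unlocked));
	} else {
		*lock_word = 0;	/* trivial lock: 0 == unlocked */
	}
}

static inline void bpf_lock_sketch(__u32 *lock_word)
{
	if (sizeof(arch_spinlock_t) <= sizeof(__u32)) {
		arch_spin_lock((arch_spinlock_t *)lock_word);
	} else {
		/* trivial test-and-set fallback */
		atomic_t *l = (atomic_t *)lock_word;

		while (atomic_cmpxchg(l, 0, 1))
			cpu_relax();
	}
}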
> >
> > > Next steps:
> > > - allow bpf_spin_lock in other map types (like cgroup local storage)
> > > - introduce BPF_F_LOCK flag for bpf_map_update() syscall and helper
> > > to request kernel to grab bpf_spin_lock before rewriting the value.
> > > That will serialize access to map elements.
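From userspace, the proposed flag might be used roughly like this; the
value layout is an assumed example, and BPF_F_LOCK is the flag proposed
above, not yet in any released kernel at this point:

#include <bpf/bpf.h>
#include <linux/bpf.h>

struct map_value {
	struct bpf_spin_lock lock;	/* embedded in the map value */
	__u64 counter;
};

static int update_serialized(int map_fd, __u32 key)
{
	struct map_value val = { .counter = 1 };

	/* Kernel grabs the element's bpf_spin_lock before rewriting it. */
	return bpf_map_update_elem(map_fd, &key, &val, BPF_F_LOCK);
}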
> >
> > So clearly this map stuff is shared between bpf proglets, otherwise
> > there would not be a need for locking. But what happens if one is from
> > task context and another from IRQ context?
> >
> > I don't see a local_irq_save()/restore() anywhere. What avoids the
> > trivial lock inversion?
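Concretely, the inversion being asked about, plus the conventional
irqsave pattern that would avoid it (sketch only; "lock" stands in for
the shared map element's lock):

/*
 *   task:  bpf_spin_lock(&elem->lock);
 *            <IRQ fires on the same CPU>
 *   irq:     bpf_spin_lock(&elem->lock);   <- spins on itself forever
 */

#include <linux/irqflags.h>
#include <linux/spinlock.h>

static void locked_update(struct qspinlock *lock)
{
	unsigned long flags;

	local_irq_save(flags);
	queued_spin_lock(lock);
	/* ... update the shared map element ... */
	queued_spin_unlock(lock);
	local_irq_restore(flags);
}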
>
> Also; what about BPF running from NMI context and using locks?
>
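Note that the irqsave pattern above does not help against NMIs, which
local_irq_save() does not mask; a purely hypothetical sketch of one
defensive option would be to refuse the lock in NMI context (or reject
such programs up front at verification time):

#include <linux/hardirq.h>

static bool bpf_lock_usable_here(void)
{
	/* A lock taken in NMI context can deadlock against the very
	 * CPU that already holds it, so refuse to take it here. */
	return !in_nmi();
}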