Date:   Fri, 25 Jan 2019 14:51:12 -0800
From:   "Paul E. McKenney" <paulmck@...ux.ibm.com>
To:     Jann Horn <jannh@...gle.com>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Alexei Starovoitov <ast@...nel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Daniel Borkmann <daniel@...earbox.net>,
        jakub.kicinski@...ronome.com,
        Network Development <netdev@...r.kernel.org>,
        kernel-team@...com, Ingo Molnar <mingo@...hat.com>,
        Will Deacon <will.deacon@....com>
Subject: Re: [PATCH v4 bpf-next 1/9] bpf: introduce bpf_spin_lock

On Fri, Jan 25, 2019 at 05:18:12PM +0100, Jann Horn wrote:
> On Fri, Jan 25, 2019 at 5:12 AM Paul E. McKenney <paulmck@...ux.ibm.com> wrote:
> > On Fri, Jan 25, 2019 at 02:46:55AM +0100, Jann Horn wrote:
> > > On Fri, Jan 25, 2019 at 2:22 AM Paul E. McKenney <paulmck@...ux.ibm.com> wrote:
> > > > On Thu, Jan 24, 2019 at 04:05:16PM -0800, Alexei Starovoitov wrote:
> > > > > On Thu, Jan 24, 2019 at 03:42:32PM -0800, Paul E. McKenney wrote:
> > > > > > On Thu, Jan 24, 2019 at 07:56:52PM +0100, Peter Zijlstra wrote:
> > > > > > > On Thu, Jan 24, 2019 at 07:01:09PM +0100, Peter Zijlstra wrote:
> > > > > > > >
> > > > > > > > Thanks for having kernel/locking people on Cc...
> > > > > > > >
> > > > > > > > On Wed, Jan 23, 2019 at 08:13:55PM -0800, Alexei Starovoitov wrote:
> > > > > > > >
> > > > > > > > > Implementation details:
> > > > > > > > > - on !SMP bpf_spin_lock() becomes a nop
> > > > > > > >
> > > > > > > > Because no BPF program is preemptible? I don't see any assertions or
> > > > > > > > even a comment that says this code is non-preemptible.
> > > > > > > >
> > > > > > > > AFAICT some of the BPF_RUN_PROG things are under rcu_read_lock() only,
> > > > > > > > which is not sufficient.
> > > > > > > >
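
For concreteness, the kind of assertion being asked for might look like
the sketch below (hypothetical name and placement, assuming the helper
must only ever run with preemption disabled; this is not code from the
patch):

	#include <linux/preempt.h>
	#include <linux/bug.h>

	/*
	 * Hypothetical: make the non-preemptible requirement explicit.
	 * rcu_read_lock() alone does not disable preemption on
	 * CONFIG_PREEMPT kernels, so assert the stronger condition.
	 */
	static inline void bpf_spin_lock_assert_ctx(void)
	{
		WARN_ON_ONCE(preemptible());
	}
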
> > > > > > > > > > - on architectures that don't support queued_spin_lock, a trivial lock is used.
> > > > > > > > >   Note that arch_spin_lock cannot be used, since not all archs agree that
> > > > > > > > >   zero == unlocked and sizeof(arch_spinlock_t) != sizeof(__u32).
> > > > > > > >
> > > > > > > > I really don't much like direct usage of qspinlock; esp. not as a
> > > > > > > > surprise.
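
As a sketch of what the "trivial lock" fallback could look like on a
bare 32-bit word (assuming zero == unlocked, per the constraint quoted
above; illustrative only, not the patch's code):

	#include <linux/atomic.h>

	/* Test-and-set lock on a 32-bit word; 0 means unlocked, which
	 * matches zero-initialized BPF map memory.
	 */
	static inline void bpf_trivial_lock(atomic_t *l)
	{
		do {
			/* Spin read-only until the word looks unlocked... */
			atomic_cond_read_relaxed(l, !VAL);
			/* ...then try to take it; old value 0 means we won. */
		} while (atomic_xchg(l, 1));
	}

	static inline void bpf_trivial_unlock(atomic_t *l)
	{
		/* Release store; pairs with the fully ordered xchg above. */
		atomic_set_release(l, 0);
	}
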
> > > > > >
> > > > > > Substituting the lightweight-reader SRCU as discussed earlier would allow
> > > > > > use of a more generic locking primitive, for example, one that allowed
> > > > > > blocking, at least in cases where the context allowed this.
> > > > > >
> > > > > > git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
> > > > > > branch srcu-lr.2019.01.16a.
> > > > > >
> > > > > > One advantage of a more generic locking primitive would be keeping BPF
> > > > > > programs independent of internal changes to spinlock primitives.
> > > > >
> > > > > Let's keep the "srcu in bpf" discussion separate from the bpf_spin_lock
> > > > > discussion.  bpf is not switching to srcu any time soon.  If/when that
> > > > > happens, it will be only for certain prog+map types, like bpf syscall
> > > > > probes that need to be able to do copy_from_user from a bpf prog.
> > > >
> > > > Hmmm...  What prevents BPF programs from looping infinitely within an
> > > > RCU reader and, as you noted, with preemption disabled?
> > > >
> > > > If BPF programs are in fact allowed to loop infinitely, it would be
> > > > very good for the health of the kernel to have preemption enabled.
> > > > And to be within an SRCU read-side critical section instead of an RCU
> > > > read-side critical section.
> > >
> > > The BPF verifier prevents loops; this is in push_insn() in
> > > kernel/bpf/verifier.c, which errors out with -EINVAL when a back edge
> > > is encountered. For non-root programs, that limits the maximum number
> > > of instructions per eBPF engine execution to
> > > BPF_MAXINSNS*MAX_TAIL_CALL_CNT==4096*32==131072 (but that includes
> > > call instructions, which can cause relatively expensive operations
> > > like hash table lookups). For programs created with CAP_SYS_ADMIN,
> > > things get more tricky because you can create your own functions and
> > > call them repeatedly; I'm not sure whether the pessimal runtime there
> > > becomes exponential, or whether there is some check that catches this.
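
For reference, the back-edge rejection works along these lines (a
simplified sketch of the DFS state tracking, not the verifier's actual
code):

	#include <linux/errno.h>

	/* DFS node states, in the spirit of check_cfg()/push_insn(). */
	enum { UNVISITED = 0, DISCOVERED, EXPLORED };

	/*
	 * Visiting an edge to instruction w: an edge into a DISCOVERED
	 * (still on the DFS stack) node is a back edge, i.e. a loop.
	 */
	static int visit_edge(int *insn_state, int w)
	{
		if (insn_state[w] == DISCOVERED)
			return -EINVAL;	/* back edge: reject the program */
		if (insn_state[w] == UNVISITED)
			insn_state[w] = DISCOVERED;	/* explore w next */
		return 0;
	}
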
> >
> > Whew!!!  ;-)
> >
> > So no more than (say) 100 milliseconds?
> 
> Depends on RLIMIT_MEMLOCK and on how hard userspace is trying to make
> things slow, I guess - if userspace manages to create a hashtable a few
> dozen megabytes in size with worst-case assignment of elements to
> buckets (everything in a single bucket), every lookup call on that
> bucket becomes a linked-list traversal through a list that must be
> stored in main memory because it's too big for the CPU caches.  I
> don't know how much time that translates into.
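
To make that worst case concrete, here is a small userspace sketch
(plain C, nothing BPF-specific; all names are made up) of a chained
hash table in which every key lands in one bucket, so lookup
degenerates into a long pointer chase:

	#include <stddef.h>

	struct entry {
		long key;
		struct entry *next;
	};

	#define NBUCKETS 1024
	static struct entry *buckets[NBUCKETS];

	/* Pathological hash: every key collides into bucket 0. */
	static unsigned int bad_hash(long key) { (void)key; return 0; }

	static void insert(struct entry *e)
	{
		unsigned int b = bad_hash(e->key);

		e->next = buckets[b];	/* prepend to the one hot bucket */
		buckets[b] = e;
	}

	static struct entry *lookup(long key)
	{
		struct entry *e = buckets[bad_hash(key)];

		while (e && e->key != key)
			e = e->next;	/* each step is likely a cache miss */
		return e;
	}
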

So perhaps you have a candidate BPF program for the RCU CPU stall warning
challenge, then.  ;-)

							Thanx, Paul
