Message-ID: <20200528221659.GS2483@worktop.programming.kicks-ass.net>
Date: Fri, 29 May 2020 00:16:59 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Joel Fernandes <joel@...lfernandes.org>
Cc: "Paul E. McKenney" <paulmck@...nel.org>,
Andrii Nakryiko <andriin@...com>,
Alan Stern <stern@...land.harvard.edu>, parri.andrea@...il.com,
will@...nel.org, boqun.feng@...il.com, npiggin@...il.com,
dhowells@...hat.com, j.alglave@....ac.uk, luc.maranget@...ia.fr,
akiyks@...il.com, dlustig@...dia.com, linux-kernel@...r.kernel.org,
linux-arch@...r.kernel.org,
"andrii.nakryiko@...il.com" <andrii.nakryiko@...il.com>
Subject: Re: Some -serious- BPF-related litmus tests
On Thu, May 28, 2020 at 06:00:47PM -0400, Joel Fernandes wrote:
> Any idea why this choice of locking-based ring buffer implementation in BPF?
> The ftrace ring buffer can support NMI interruptions as well for writes.
>
> Also, is it possible for BPF to reuse the ftrace ring buffer implementation
> or does it not meet the requirements?
Both the perf and ftrace buffers are per-cpu, which, according to the
patch description, is too much memory overhead for them. Neither has
ever considered anything else; atomic ops are expensive.
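
(For context, a minimal sketch of why a per-cpu buffer is cheap on the
write path: with a single producer per CPU and preemption disabled, the
reserve step is plain loads and stores, no atomic ops. The names below
are made up for illustration; this is not the actual perf/ftrace code,
which also has to cope with NMI nesting.)

	struct pcpu_rb {
		unsigned long	head;	/* producer position, this CPU only */
		unsigned long	tail;	/* consumer position */
		unsigned long	mask;	/* size - 1, size a power of 2 */
		void		*data;
	};

	/* called with preemption disabled; no locks, no atomics */
	static void *pcpu_rb_reserve(struct pcpu_rb *rb, size_t len)
	{
		if (rb->head + len - READ_ONCE(rb->tail) > rb->mask + 1)
			return NULL;	/* full: drop or overwrite policy */
		return rb->data + (rb->head & rb->mask);
	}

	/* after the record has been written */
	static void pcpu_rb_commit(struct pcpu_rb *rb, size_t len)
	{
		smp_store_release(&rb->head, rb->head + len);
	}

The cost is, of course, one such buffer per CPU, which is the memory
overhead the patch description objects to.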
On top of that, they want multi-producer support. Yes, doing that gets
interesting really fast, but using spinlocks gets you a trainwreck like
this.
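
(Roughly the shape of the locked multi-producer reserve under
discussion, simplified from the patch rather than quoted verbatim:
every writer on every CPU serializes on one spinlock, and from NMI
context it can only trylock.)

	struct mp_rb {
		spinlock_t	lock;
		unsigned long	prod_pos;
		unsigned long	cons_pos;
		unsigned long	mask;	/* size - 1 */
		void		*data;
	};

	static void *mp_rb_reserve(struct mp_rb *rb, size_t len)
	{
		unsigned long cons, prod, flags;
		void *rec = NULL;

		if (in_nmi()) {
			/* cannot spin in NMI: contention == record lost */
			if (!spin_trylock_irqsave(&rb->lock, flags))
				return NULL;
		} else {
			spin_lock_irqsave(&rb->lock, flags);
		}

		cons = smp_load_acquire(&rb->cons_pos);
		prod = rb->prod_pos;
		if (prod + len - cons <= rb->mask + 1) {
			rec = rb->data + (prod & rb->mask);
			rb->prod_pos = prod + len;
		}
		/* else: buffer full, record dropped as well */

		spin_unlock_irqrestore(&rb->lock, flags);
		return rec;
	}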
This thing so readily wanting to drop data on the floor should worry
people, but apparently they've not spent enough time debugging stuff
with partial logs yet. Of course, bpf_prog_active already makes BPF
lossy, so maybe that's why they went with it.
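
(The bpf_prog_active lossiness referred to here is the per-CPU
recursion guard on the tracing entry path; roughly the pattern in
trace_call_bpf(), abridged:)

	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
		/*
		 * A BPF program is already running on this CPU;
		 * this event is silently skipped, i.e. lost.
		 */
		ret = 0;
		goto out;
	}

	/* run the attached program(s) */
 out:
	__this_cpu_dec(bpf_prog_active);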
All reasons why I never bother with BPF, aside from it being more
difficult than hacking up a kernel in the first place.