Message-ID: <20200306172140.GA237112@google.com>
Date: Fri, 6 Mar 2020 12:21:40 -0500
From: Joel Fernandes <joel@...lfernandes.org>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-arch <linux-arch@...r.kernel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...nel.org>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"Gustavo A. R. Silva" <gustavo@...eddedor.com>,
Thomas Gleixner <tglx@...utronix.de>,
"Paul E. McKenney" <paulmck@...nel.org>,
Josh Triplett <josh@...htriplett.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
jiangshanlai@...il.com, Andy Lutomirski <luto@...nel.org>,
Tony Luck <tony.luck@...el.com>,
Frederic Weisbecker <frederic@...nel.org>,
Dan Carpenter <dan.carpenter@...cle.com>,
Masami Hiramatsu <mhiramat@...nel.org>
Subject: Re: [PATCH v4 16/27] tracing: Remove regular RCU context for
_rcuidle tracepoints (again)
On Fri, Mar 06, 2020 at 07:51:18AM -0800, Alexei Starovoitov wrote:
> On Fri, Mar 6, 2020 at 3:31 AM Peter Zijlstra <peterz@...radead.org> wrote:
> >
> > On Fri, Mar 06, 2020 at 11:43:35AM +0100, Peter Zijlstra wrote:
> > > On Fri, Feb 21, 2020 at 02:34:32PM +0100, Peter Zijlstra wrote:
> > > > Effectively revert commit 865e63b04e9b2 ("tracing: Add back in
> > > > rcu_irq_enter/exit_irqson() for rcuidle tracepoints") now that we've
> > > > taught perf how to deal with not having an RCU context provided.
> > > >
> > > > Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> > > > Reviewed-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
> > > > ---
> > > > include/linux/tracepoint.h | 8 ++------
> > > > 1 file changed, 2 insertions(+), 6 deletions(-)
> > > >
> > > > --- a/include/linux/tracepoint.h
> > > > +++ b/include/linux/tracepoint.h
> > > > @@ -179,10 +179,8 @@ static inline struct tracepoint *tracepo
> > > > * For rcuidle callers, use srcu since sched-rcu \
> > > > * doesn't work from the idle path. \
> > > > */ \
> > > > - if (rcuidle) { \
> > > > + if (rcuidle) \
> > > > __idx = srcu_read_lock_notrace(&tracepoint_srcu);\
> > > > - rcu_irq_enter_irqsave(); \
> > > > - } \
> > > > \
> > > > it_func_ptr = rcu_dereference_raw((tp)->funcs); \
> > > > \
> > > > @@ -194,10 +192,8 @@ static inline struct tracepoint *tracepo
> > > > } while ((++it_func_ptr)->func); \
> > > > } \
> > > > \
> > > > - if (rcuidle) { \
> > > > - rcu_irq_exit_irqsave(); \
> > > > + if (rcuidle) \
> > > > srcu_read_unlock_notrace(&tracepoint_srcu, __idx);\
> > > > - } \
> > > > \
> > > > preempt_enable_notrace(); \
> > > > } while (0)
> > >
> > > So what happens when BPF registers for these tracepoints? BPF very much
> > > wants RCU on AFAIU.
> >
> > I suspect we needs something like this...
> >
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index a2f15222f205..67a39dbce0ce 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -1475,11 +1475,13 @@ void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp)
> > static __always_inline
> > void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
> > {
> > + int rcu_flags = trace_rcu_enter();
> > rcu_read_lock();
> > preempt_disable();
> > (void) BPF_PROG_RUN(prog, args);
> > preempt_enable();
> > rcu_read_unlock();
> > + trace_rcu_exit(rcu_flags);
>
> One big NACK.
> I will not slowdown 99% of cases because of one dumb user.
> Absolutely no way.
For the 99% of use cases, the above adds only an additional atomic_read and a
branch. Is that the concern? Just want to make sure we are talking about the
same thing.
Speaking of slowdowns, you don't really need that rcu_read_lock/unlock()
pair in __bpf_trace_run() AFAICS. The rcu_read_unlock() can run into the
rcu_read_unlock_special() slowpath, and even when it doesn't, it still has
branches. Most importantly, RCU is consolidated now, which means
preempt_disable() already implies rcu_read_lock().
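To illustrate, the simplification I have in mind would look roughly like the
sketch below (hypothetical and untested, not a proper patch; it assumes the
consolidated-RCU guarantee that a preempt-disabled region is itself an RCU
read-side critical section):

```c
static __always_inline
void __bpf_trace_run(struct bpf_prog *prog, u64 *args)
{
	/*
	 * With consolidated RCU, preempt_disable() already marks an
	 * RCU read-side critical section, so the explicit
	 * rcu_read_lock()/rcu_read_unlock() pair around the program
	 * run could in principle be dropped.
	 */
	preempt_disable();
	(void) BPF_PROG_RUN(prog, args);
	preempt_enable();
}
```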
thanks,
- Joel