Message-ID: <20170516090708.35fd8a12@gandalf.local.home>
Date: Tue, 16 May 2017 09:07:08 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@...nel.org>, linux-kernel@...r.kernel.org,
Masami Hiramatsu <mhiramat@...nel.org>
Subject: Re: Use case for TASKS_RCU
On Tue, 16 May 2017 05:23:54 -0700
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:
> On Tue, May 16, 2017 at 08:22:33AM +0200, Ingo Molnar wrote:
> >
> > * Paul E. McKenney <paulmck@...ux.vnet.ibm.com> wrote:
> >
> > > Hello!
> > >
> > > The question of the use case for TASKS_RCU came up, and here is my
> > > understanding. Steve will not be shy about correcting any misconceptions
> > > I might have. ;-)
> > >
> > > The use case is to support freeing of trampolines used in tracing/probing
> > > in CONFIG_PREEMPT=y kernels. It is necessary to wait until any task
> > > executing in the trampoline in question has left it, taking into account
> > > that the trampoline's code might be interrupted and preempted. However,
> > > the code in the trampolines is guaranteed never to context switch.
> > >
> > > Note that in CONFIG_PREEMPT=n kernels, synchronize_sched() suffices.
> > > It is therefore tempting to think in terms of disabling preemption across
> > > the trampolines, but there is apparently not enough room to accommodate
> > > the needed preempt_disable() and preempt_enable() in the code invoking
> > > the trampoline, and putting the preempt_disable() and preempt_enable()
> > > in the trampoline itself fails because of the possibility of preemption
> > > just before the preempt_disable() and just after the preempt_enable().
> > > Similar reasoning rules out use of rcu_read_lock() and rcu_read_unlock().
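(To see the window Paul is describing, here is a rough sketch. This is
illustrative pseudo-C, not actual trampoline code:

	trampoline:
		/* a task can be preempted HERE, already on trampoline
		 * text, before the disable takes effect */
		preempt_disable();
		callback();
		preempt_enable();
		/* ... and HERE, after the enable, still on trampoline
		 * text that is about to be freed */
		/* return to the traced function */

Either window leaves a preempted task sitting on the trampoline while
in no RCU-sched read-side critical section, so synchronize_sched() has
nothing to wait on.)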
> >
> > So how was this solved before TASKS_RCU? Also, nothing uses call_rcu_tasks() at
> > the moment, so it's hard for me to review its users. What am I missing?
>
> Before TASKS_RCU, the trampolines were just leaked when CONFIG_PREEMPT=y.
Actually, things simply were not implemented. This is why optimized
kprobes depend on !CONFIG_PREEMPT. In fact, we can now optimize
kprobes on CONFIG_PREEMPT with this utility. Right Masami?
With ftrace, perf and other "dynamic" users (where the ftrace_ops was
created via kmalloc) would not get the benefit of being called
directly. They all needed to have their mcount/fentry calls go to a
static trampoline that disabled preemption before calling the
callback. This static trampoline is shared by all, so even if perf was
the only callback for a function, it had to go through this
trampoline, which iterated over all registered ftrace_ops to see which
ones had a callback for the given function.
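Heavily simplified, that shared trampoline's C callback amounts to
something like this (ops_wants_ip() is a stand-in for the real hash
test, not an actual function):

	/* every traced function funnels through this one loop */
	preempt_disable_notrace();
	for (op = ftrace_ops_list; op; op = op->next) {
		if (ops_wants_ip(op, ip))	/* "is this function mine?" */
			op->func(ip, parent_ip, op, regs);
	}
	preempt_enable_notrace();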
With this utility, perf not only avoids that static loop trampoline,
it can even have its own trampoline created, one that does not need to
check whether perf wants the function at all, since the only way the
trampoline is called is if perf asked for it.
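As a sketch (free_trampoline() is a made-up name, not a real API), the
dedicated trampoline reduces to a direct call, and tearing it down is
where TASKS_RCU comes in:

	/* body of perf's own trampoline, conceptually */
	op->func(ip, parent_ip, op, regs);	/* direct, no list walk,
						 * no ownership check */

	/* on unregister, before freeing the trampoline's memory */
	synchronize_rcu_tasks();	/* every task has voluntarily
					 * scheduled, so none is still
					 * executing on the trampoline */
	free_trampoline(tramp);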
>
> Current mainline kernel/trace/ftrace.c uses synchronize_rcu_tasks().
> So yes, currently one user.
>
And the kpatch folks want to use it too.
-- Steve