[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20200510154908.GR2869@paulmck-ThinkPad-P72>
Date: Sun, 10 May 2020 08:49:08 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Lai Jiangshan <jiangshanlai@...il.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Joel Fernandes <joel@...lfernandes.org>,
rcu <rcu@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
"kernel-team@...com," <kernel-team@...com>,
Ingo Molnar <mingo@...nel.org>, dipankar <dipankar@...ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Josh Triplett <josh@...htriplett.org>,
Thomas Glexiner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
David Howells <dhowells@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Oleg Nesterov <oleg@...hat.com>,
Masami Hiramatsu <mhiramat@...nel.org>
Subject: Re: [PATCH RFC tip/core/rcu 09/16] rcu-tasks: Add an RCU-tasks rude
variant
On Sun, May 10, 2020 at 05:59:27PM +0800, Lai Jiangshan wrote:
> On Tue, Mar 17, 2020 at 6:03 AM Steven Rostedt <rostedt@...dmis.org> wrote:
> >
> > On Mon, 16 Mar 2020 17:45:40 -0400
> > Joel Fernandes <joel@...lfernandes.org> wrote:
> >
> > > >
> > > > Same for the function side (if not even more so). This would require adding
> > > > a srcu_read_lock() to all functions that can be traced! That would be a huge
> > > > kill in performance. Probably to the point no one would bother even using
> > > > function tracer.
> > >
> > > Point well taken! Thanks,
> >
> > Actually, it's worse than that. (We talked about this on IRC but I wanted
> > it documented here too).
> >
> > You can't use any type of locking, unless you insert it around all the
> > callers of the nops (which is unreasonable).
> >
> > That is, we have gcc -pg -mfentry that creates at the start of all traced
> > functions:
> >
> > <some_func>:
> > call __fentry__
> > [code for function here]
> >
> > At boot up (or even by the compiler itself) we convert that to:
> >
> > <some_func>:
> > nop
> > [code for function here]
> >
> >
> > When we want to trace this function we use text_poke (with current kernels)
> > and convert it to this:
> >
> > <some_func>:
> > call trace_trampoline
> > [code for function here]
> >
> >
> > That trace_trampoline can be allocated, which means when its no longer
> > needed, it must be freed. But when do we know it's safe to free it? Here's
> > the issue.
> >
> >
> > <some_func>:
> > call trace_trampoline <- interrupt happens just after the jump
> > [code for function here]
> >
> > Now the task has just executed the call to the trace_trampoline. Which
> > means the instruction pointer is set to the start of the trampoline. But it
> > has yet executed that trampoline.
> >
> > Now if the task is preempted, and a real time hog is keeping it from
> > running for minutes at a time (which is possible!). And in the mean time,
> > we are done with that trampoline and free it. What happens when that task
> > is scheduled back? There's no more trampoline to execute even though its
> > instruction pointer is to execute the first operand on the trampoline!
> >
> > I used the analogy of jumping off the cliff expecting a magic carpet to be
> > there to catch you, and just before you land, it disappears. That would be
> > a very bad day indeed!
> >
> > We have no way to add a grace period between the start of a function (can
> > be *any* function) and the start of the trampoline.
>
> Hello
>
> I think adding a small number of instructions to preempt_schedule_irq()
> is sufficient to create the needed protected region between the start
> of a function and the trampoline body.
>
> preempt_schedule_irq() {
> + if (unlikely(is_trampoline_page(page_of(interrupted_ip)))) {
> + return; // don't do preempt schedule
> +
> + }
> preempt_schedule_irq() original body
> }
>
> // generated on trampoline pages
> trace_trampoline() {
> preempt_disable();
> trace_trampoline body
> jmp preempt_enable_traced(clobbers)
> }
>
> asm(kernel text):
> preempt_enable_traced:
> preempt_enable_notrace();
> restore cobblers
> return(the return ip on the stack is traced_function_start_code)
>
>
> If the number of instructions added in preempt_schedule_irq() and
> the complexity to make trampoline ip detectable(is_trampoline_page(),
> or is_trampoline_range()) are small, and tasks_rcu is rendered useless,
> I think it will be win-win.
It certainly would provide a nice reduction in code size!
This would provide a zero-instructions preempt_disable() at the beginning
of the trampoline and a zero-instructions preempt_enable_no_resched() at
the end, correct? If so, wouldn't this create a potentially long (though
"weak") preempt-disable region extending to the next preempt_enable(),
local_bh_enable(), schedule(), interrupt, transition to userspace,
or similar? This could be quite some time. Note that cond_resched()
wouldn't help, given that this is only in PREEMPT=y kernels.
The "weak" refers to the fact that if a second resched IPI arrived in the
meantime, preemption would then happen. But without that second IPI,
the request for preemption could be ignored for quite some time.
Or am I missing something here?
Thanx, Paul
> Thanks
>
> Lai
>
> > Since the problem is
> > that the task was non-voluntarily preempted before it could execute the
> > trampoline, and that trampolines are not allowed (suppose) to call
> > schedule, then we have our quiescent state to track (voluntary scheduling).
> > When all tasks have either voluntarily scheduled, or entered user space
> > after disconnecting a trampoline from a function, we know that it is safe to
> > free the trampoline.
> >
> > -- Steve
Powered by blists - more mailing lists