Date: Thu, 27 Jul 2017 21:38:10 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org, jiangshanlai@...il.com,
	dipankar@...ibm.com, akpm@...ux-foundation.org,
	mathieu.desnoyers@...icios.com, josh@...htriplett.org,
	tglx@...utronix.de, peterz@...radead.org, dhowells@...hat.com,
	edumazet@...gle.com, fweisbec@...il.com, oleg@...hat.com
Subject: Re: [PATCH tip/core/rcu 07/15] rcu: Add event tracing to ->gp_tasks update at GP start

On Mon, 24 Jul 2017 14:44:36 -0700
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:

> There is currently event tracing to track when a task is preempted
> within a preemptible RCU read-side critical section, and also when that
> task subsequently reaches its outermost rcu_read_unlock(), but none
> indicating when a new grace period starts when that grace period must
> wait on pre-existing readers that have been preempted at least once
> since the beginning of their current RCU read-side critical sections.
>
> This commit therefore adds an event trace at grace-period start in
> the case where there are such readers. Note that only the first
> reader in the list is traced.
>
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> ---
>  kernel/rcu/tree_plugin.h | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 14ba496a13cd..3e3f92e981a1 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -636,10 +636,17 @@ static int rcu_print_task_exp_stall(struct rcu_node *rnp)
>   */
>  static void rcu_preempt_check_blocked_tasks(struct rcu_node *rnp)
>  {
> +	struct task_struct *t;
> +
> 	RCU_LOCKDEP_WARN(preemptible(), "rcu_preempt_check_blocked_tasks() invoked with preemption enabled!!!\n");
> 	WARN_ON_ONCE(rcu_preempt_blocked_readers_cgp(rnp));
> -	if (rcu_preempt_has_tasks(rnp))
> +	if (rcu_preempt_has_tasks(rnp)) {

The only function of this if block is to fill the content of the trace
event, correct? What about doing:

	if (trace_rcu_unlock_preempted_task_enabled() &&
	    rcu_preempt_has_tasks(rnp)) {

instead? The trace_rcu_unlock_preempted_task_enabled() is a static
branch (aka jump_label), which would make the above a constant branch
when tracing is not enabled, and would keep this from adding any extra
overhead.

-- Steve

> 		rnp->gp_tasks = rnp->blkd_tasks.next;
> +		t = container_of(rnp->gp_tasks, struct task_struct,
> +				 rcu_node_entry);
> +		trace_rcu_unlock_preempted_task(TPS("rcu_preempt-GPS"),
> +						rnp->gpnum, t->pid);
> +	}
> 	WARN_ON_ONCE(rnp->qsmask);
> }