Message-ID: <20090603153508.GC6640@linux.vnet.ibm.com>
Date:	Wed, 3 Jun 2009 08:35:08 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Steven Rostedt <rostedt@...dmis.org>
Cc:	Frederic Weisbecker <fweisbec@...il.com>,
	LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>, stable@...nel.org,
	"Luis Claudio R. Goncalves" <lclaudio@...g.org>,
	Oleg Nesterov <oleg@...sign.ru>
Subject: Re: [PATCH 2/3] function-graph: enable the stack after
	initialization of other variables

On Tue, Jun 02, 2009 at 03:30:14PM -0400, Steven Rostedt wrote:
> 
> On Tue, 2 Jun 2009, Frederic Weisbecker wrote:
> > > diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
> > > index d28687e..baeb5fe 100644
> > > --- a/kernel/trace/trace_functions_graph.c
> > > +++ b/kernel/trace/trace_functions_graph.c
> > > @@ -65,6 +65,12 @@ ftrace_push_return_trace(unsigned long ret, unsigned long func, int *depth)
> > >  	if (!current->ret_stack)
> > >  		return -EBUSY;
> > >  
> > > +	/*
> > > +	 * We must make sure the ret_stack is tested before we read
> > > +	 * anything else.
> > > +	 */
> > > +	smp_rmb();
> > 
> > 
> > Isn't this part too costly for every traced function?
> 
> I was thinking that, but otherwise we still have the problem. It is a read 
> barrier, which I don't think is as costly as a write or full barrier. But 
> because we have the if statement, perhaps a "read_barrier_depends()" would do?

Although this would work on some CPU architectures (x86, s390, Power,
and perhaps a few others), it would break on those weakly ordered systems
that do not respect control dependencies.
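
To make the failure mode concrete, here is a sketch of the pattern in
the patch above (illustrative only, not a proposed change):

	if (!current->ret_stack)			/* load A */
		return -EBUSY;
	/*
	 * Nothing here forces ordering: a sufficiently weakly ordered
	 * CPU may satisfy load B below before load A above, because
	 * the branch is only a control dependency, and
	 * read_barrier_depends() orders data dependencies, not
	 * control dependencies.
	 */
	if (current->curr_ret_stack == FTRACE_RETFUNC_DEPTH - 1)  /* load B */
		atomic_inc(&current->trace_overrun);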

One approach would be to use another level of indirection.  If this new
pointer is NULL, you return EBUSY.  Make this new pointer reference
the updated fields (tracing_graph_pause, trace_overrun, and so on).
This could point into the same data structure -- it is not necessary to
actually allocate a new chunk of memory.

Then you can use the existing rcu_assign_pointer() and rcu_dereference()
primitives.
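
For concreteness, a minimal sketch of that indirection; the struct and
field names here (ret_stack_info, graph_info, graph_publish, and
graph_check) are made up for illustration and are not the actual
ftrace identifiers:

	struct ret_stack_info {
		atomic_t		tracing_graph_pause;
		atomic_t		trace_overrun;
		struct ftrace_ret_stack	*ret_stack;
		/* ... any other fields the reader depends on ... */
	};

	/* Writer: fully initialize, then publish. */
	static void graph_publish(struct task_struct *t,
				  struct ret_stack_info *info)
	{
		atomic_set(&info->tracing_graph_pause, 0);
		atomic_set(&info->trace_overrun, 0);
		/*
		 * rcu_assign_pointer() supplies the write barrier that
		 * orders the initialization above before the pointer
		 * becomes visible to readers.
		 */
		rcu_assign_pointer(t->graph_info, info);
	}

	/*
	 * Reader: the dependency ordering provided by rcu_dereference()
	 * makes the subsequent field reads safe on all architectures.
	 */
	static int graph_check(struct task_struct *t)
	{
		struct ret_stack_info *info = rcu_dereference(t->graph_info);

		if (!info)
			return -EBUSY;
		if (atomic_read(&info->trace_overrun))
			return -EBUSY;
		return 0;
	}

Since the new pointer can live in (and point into) the same data
structure, the fast-path cost is a single dependent load rather than
a barrier on every traced function.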

Does this approach help at all?

							Thanx, Paul

> [ added Paul McKenney because he's good with barriers ]
> 
> -- Steve
> 
> 
> > 
> > 
> > > +
> > >  	/* The return trace stack is full */
> > >  	if (current->curr_ret_stack == FTRACE_RETFUNC_DEPTH - 1) {
> > >  		atomic_inc(&current->trace_overrun);
> > > -- 
> > > 1.6.3.1
> > > 
> > > -- 
> > 
> > 
