Date:	Thu, 11 Nov 2010 08:13:44 +0800
From:	Li Zefan <lizf@...fujitsu.com>
To:	Jiri Olsa <jolsa@...hat.com>
CC:	mingo@...e.hu, rostedt@...dmis.org, andi@...stfloor.org,
	lwoodman@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] tracing - fix recursive user stack trace

Jiri Olsa wrote:
> The user stack trace can fault when examining the trace. Which
> would call the do_page_fault handler, which would trace again,
> which would do the user stack trace, which would fault and call
> do_page_fault again ...
> 
> Thus this is causing a recursive bug. We need to have a recursion
> detector here.
> 

I guess this is from what I reported to Redhat, triggered by
the ftrace stress test. ;)

This patch should be the first in the series; otherwise you introduce
a regression. Though it's only a minor problem in this case, it's
better to avoid it.

A nitpick below:

> 
> Signed-off-by: Steven Rostedt <srostedt@...hat.com>
> Signed-off-by: Jiri Olsa <jolsa@...hat.com>
> ---
>  kernel/trace/trace.c |   19 +++++++++++++++++++
>  1 files changed, 19 insertions(+), 0 deletions(-)
> 
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 82d9b81..0215e87 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -1284,6 +1284,8 @@ void trace_dump_stack(void)
>  	__ftrace_trace_stack(global_trace.buffer, flags, 3, preempt_count());
>  }
>  
> +static DEFINE_PER_CPU(int, user_stack_count);
> +
>  void
>  ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
>  {
> @@ -1302,6 +1304,18 @@ ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
>  	if (unlikely(in_nmi()))
>  		return;
>  
> +	/*
> +	 * prevent recursion, since the user stack tracing may
> +	 * trigger other kernel events.
> +	 */
> +	preempt_disable();
> +	if (__get_cpu_var(user_stack_count))
> +		goto out;
> +
> +	__get_cpu_var(user_stack_count)++;
> +
> +
> +

Redundant blank lines.

>  	event = trace_buffer_lock_reserve(buffer, TRACE_USER_STACK,
>  					  sizeof(*entry), flags, pc);
>  	if (!event)
> @@ -1319,6 +1333,11 @@ ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
>  	save_stack_trace_user(&trace);
>  	if (!filter_check_discard(call, entry, buffer, event))
>  		ring_buffer_unlock_commit(buffer, event);
> +
> +	__get_cpu_var(user_stack_count)--;
> +
> + out:
> +	preempt_enable();
>  }
>  
>  #ifdef UNUSED
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/