Message-ID: <4B548A9A.1020806@redhat.com>
Date:	Mon, 18 Jan 2010 11:21:46 -0500
From:	Masami Hiramatsu <mhiramat@...hat.com>
To:	Xiao Guangrong <xiaoguangrong@...fujitsu.com>
CC:	Ingo Molnar <mingo@...e.hu>, Peter Zijlstra <peterz@...radead.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Paul Mackerras <paulus@...ba.org>,
	Jason Baron <jbaron@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/3] perf_event: cleanup for event profile buffer operation

Xiao Guangrong wrote:
> Introduce ftrace_profile_buf_begin() and ftrace_profile_buf_end() to
> operate on the event profile buffer and clean up redundant code
> 
> Signed-off-by: Xiao Guangrong <xiaoguangrong@...fujitsu.com>
[...]

> diff --git a/kernel/trace/trace_event_profile.c b/kernel/trace/trace_event_profile.c
> index 9e25573..f0fa16b 100644
> --- a/kernel/trace/trace_event_profile.c
> +++ b/kernel/trace/trace_event_profile.c
> @@ -9,11 +9,8 @@
>  #include "trace.h"
>  
>  
> -char *perf_trace_buf;
> -EXPORT_SYMBOL_GPL(perf_trace_buf);
> -
> -char *perf_trace_buf_nmi;
> -EXPORT_SYMBOL_GPL(perf_trace_buf_nmi);
> +static char *perf_trace_buf;
> +static char *perf_trace_buf_nmi;
>  
>  typedef typeof(char [FTRACE_MAX_PROFILE_SIZE]) perf_trace_t ;
>  
> @@ -120,3 +117,56 @@ void ftrace_profile_disable(int event_id)
>  	}
>  	mutex_unlock(&event_mutex);
>  }
> +
> +void *ftrace_profile_buf_begin(int size, unsigned short type, int *rctxp,
> +			       unsigned long *irq_flags)
> +{
> +	struct trace_entry *entry;
> +	char *trace_buf, *raw_data;
> +	int pc, cpu;
> +
> +	pc = preempt_count();
> +
> +	/* Protect the per cpu buffer, begin the rcu read side */
> +	local_irq_save(*irq_flags);
> +
> +	*rctxp = perf_swevent_get_recursion_context();
> +	if (*rctxp < 0)
> +		goto err_recursion;
> +
> +	cpu = smp_processor_id();
> +
> +	if (in_nmi())
> +		trace_buf = rcu_dereference(perf_trace_buf_nmi);
> +	else
> +		trace_buf = rcu_dereference(perf_trace_buf);
> +
> +	if (!trace_buf)
> +		goto err;
> +
> +	raw_data = per_cpu_ptr(trace_buf, cpu);
> +
> +	/* zero the dead bytes from align to not leak stack to user */
> +	*(u64 *)(&raw_data[size - sizeof(u64)]) = 0ULL;
> +
> +	entry = (struct trace_entry *)raw_data;
> +	tracing_generic_entry_update(entry, *irq_flags, pc);
> +	entry->type = type;
> +
> +	return raw_data;
> +err:
> +	perf_swevent_put_recursion_context(*rctxp);
> +err_recursion:
> +	local_irq_restore(*irq_flags);
> +	return NULL;
> +}
> +
> +void ftrace_profile_buf_end(void *raw_data, int size, int rctx, u64 addr,
> +			    u64 count, unsigned long irq_flags)
> +{
> +	struct trace_entry *entry = raw_data;
> +
> +	perf_tp_event(entry->type, addr, count, raw_data, size);
> +	perf_swevent_put_recursion_context(rctx);
> +	local_irq_restore(irq_flags);
> +}

Hmm, could you make these inline functions, or add __kprobes?
Since they are called from kprobes, we don't want to allow probing
functions that are themselves called from kprobes handlers.

(IMHO, from the viewpoint of performance, inline functions
 could be better.)

Thank you,

-- 
Masami Hiramatsu

Software Engineer
Hitachi Computer Products (America), Inc.
Software Solutions Division

e-mail: mhiramat@...hat.com

