Message-ID: <20160311134814.GA25533@danjae.kornet>
Date: Fri, 11 Mar 2016 22:48:14 +0900
From: Namhyung Kim <namhyung@...nel.org>
To: Jiri Olsa <jolsa@...hat.com>
Cc: Jiri Olsa <jolsa@...nel.org>, Steven Rostedt <rostedt@...dmis.org>,
lkml <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Arnaldo Carvalho de Melo <acme@...nel.org>
Subject: Re: [PATCH 1/5] ftrace perf: Check sample types only for sampling
events
On Fri, Mar 11, 2016 at 09:36:24AM +0100, Jiri Olsa wrote:
> On Thu, Mar 10, 2016 at 08:25:02AM +0100, Jiri Olsa wrote:
> > On Thu, Mar 10, 2016 at 09:36:37AM +0900, Namhyung Kim wrote:
> > > Hi Jiri,
> > >
> > > On Wed, Mar 09, 2016 at 09:46:41PM +0100, Jiri Olsa wrote:
> > > > Currently we check sample types for the ftrace:function event
> > > > even if it is not created as a sampling event. That prevents
> > > > creating the ftrace:function event in counting mode.
> > > >
> > > > Make sure we check sample types only for sampling events.
> > > >
> > > > Before:
> > > > $ sudo perf stat -e ftrace:function ls
> > > > ...
> > > >
> > > > Performance counter stats for 'ls':
> > > >
> > > > <not supported> ftrace:function
> > > >
> > > > 0.001983662 seconds time elapsed
> > > >
> > > > After:
> > > > $ sudo perf stat -e ftrace:function ls
> > > > ...
> > > >
> > > > Performance counter stats for 'ls':
> > > >
> > > > 44,498 ftrace:function
> > > >
> > > > 0.037534722 seconds time elapsed
> > > >
> > > > Signed-off-by: Jiri Olsa <jolsa@...nel.org>
> > > > ---
> > > > kernel/trace/trace_event_perf.c | 4 ++--
> > > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
> > > > index 00df25fd86ef..a7171ec2c1ca 100644
> > > > --- a/kernel/trace/trace_event_perf.c
> > > > +++ b/kernel/trace/trace_event_perf.c
> > > > @@ -52,14 +52,14 @@ static int perf_trace_event_perm(struct trace_event_call *tp_event,
> > > > * event, due to issues with page faults while tracing page
> > > > * fault handler and its overall trickiness nature.
> > > > */
> > > > - if (!p_event->attr.exclude_callchain_user)
> > > > + if (is_sampling_event(p_event) && !p_event->attr.exclude_callchain_user)
> > > > return -EINVAL;
> > > >
> > > > /*
> > > > * Same reason to disable user stack dump as for user space
> > > > * callchains above.
> > > > */
> > > > - if (p_event->attr.sample_type & PERF_SAMPLE_STACK_USER)
> > > > + if (is_sampling_event(p_event) && p_event->attr.sample_type & PERF_SAMPLE_STACK_USER)
> > > > return -EINVAL;
> > > > }
> > > >
> > >
> > > What about checking is_sampling_event() first and jumping to the
> > > last paranoid_tracepoint_raw check instead?  That way we could
> > > remove the same check in the function trace case.
> >
> > right, will check
>
> hum, did you mean something like this?
>
> I'd rather keep it the original way.. it seems more straightforward
Hmm.. I think I was wrong.  But it seems we can simply return 0 for
the non-sampling case.  How about this?
Thanks,
Namhyung
diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
index 00df25fd86ef..e11108f1d197 100644
--- a/kernel/trace/trace_event_perf.c
+++ b/kernel/trace/trace_event_perf.c
@@ -47,6 +47,9 @@ static int perf_trace_event_perm(struct trace_event_call *tp_event,
if (perf_paranoid_tracepoint_raw() && !capable(CAP_SYS_ADMIN))
return -EPERM;
+ if (!is_sampling_event(p_event))
+ return 0;
+
/*
* We don't allow user space callchains for function trace
* event, due to issues with page faults while tracing page
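
Just to illustrate the intent of the early return, below is a small
stand-alone user-space model of the resulting control flow.  All of the
model_* names and the MODEL_SAMPLE_STACK_USER flag are made up for the
illustration; it only mirrors the ordering of the checks (return 0 for
counting events before any sample-type restriction), not the actual
kernel code:

/*
 * Hypothetical user-space model of the permission check discussed
 * above: a counting (non-sampling) event is accepted before any
 * sample-type restrictions are applied, while a sampling event is
 * still rejected for user callchains and user stack dumps.
 */
#include <stdbool.h>
#include <stdio.h>

/* stand-in for PERF_SAMPLE_STACK_USER */
#define MODEL_SAMPLE_STACK_USER	(1u << 0)

struct model_attr {
	unsigned int sample_period;	/* non-zero => sampling event */
	unsigned int sample_type;
	bool exclude_callchain_user;
};

/* mirrors is_sampling_event(): counting events have no sample period */
static bool model_is_sampling(const struct model_attr *attr)
{
	return attr->sample_period != 0;
}

/* simplified stand-in for perf_trace_event_perm() on ftrace:function */
static int model_event_perm(const struct model_attr *attr)
{
	/* counting mode: no samples are generated, nothing to restrict */
	if (!model_is_sampling(attr))
		return 0;

	/* sampling mode: user space callchains are not allowed here */
	if (!attr->exclude_callchain_user)
		return -1;

	/* same reason for user space stack dumps */
	if (attr->sample_type & MODEL_SAMPLE_STACK_USER)
		return -1;

	return 0;
}

int main(void)
{
	struct model_attr counting = { .sample_period = 0 };
	struct model_attr sampling = { .sample_period = 4000 };

	/* counting event passes, sampling event without
	 * exclude_callchain_user is rejected */
	printf("counting: %d\n", model_event_perm(&counting));
	printf("sampling: %d\n", model_event_perm(&sampling));
	return 0;
}

Running it prints "counting: 0" and "sampling: -1" (-1 standing in for
-EINVAL here), which matches the counting-mode behaviour the change is
after.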