Date:	Sat, 19 Sep 2009 10:03:21 +0200
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Li Zefan <lizf@...fujitsu.com>,
	Jason Baron <jbaron@...hat.com>,
	Masami Hiramatsu <mhiramat@...hat.com>
Subject: Re: [PATCH 0/2 v3] tracing: Tracing event profiling updates

On Sat, Sep 19, 2009 at 09:34:00AM +0200, Ingo Molnar wrote:
> 
> * Frederic Weisbecker <fweisbec@...il.com> wrote:
> 
> > 
> > Ingo,
> > 
> > Hopefully this is my last attempt.
> > This new iteration fixes the syscalls events to correctly handle
> > the buffer. In the previous version, they did not care about interrupts.
> > 
> > I only resend the second patch as only this one has changed since the v2.
> > 
> > The new branch is in:
> > git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing.git
> > 	tracing/core-v3
> > 
> > Thanks,
> > 	Frederic.
> > 
> > Frederic Weisbecker (2):
> >       tracing: Factorize the events profile accounting
> >       tracing: Allocate the ftrace event profile buffer dynamically
> > 
> >  include/linux/ftrace_event.h       |   10 +++-
> >  include/linux/syscalls.h           |   24 +++-----
> >  include/trace/ftrace.h             |  111 ++++++++++++++++++++---------------
> >  kernel/trace/trace_event_profile.c |   79 +++++++++++++++++++++++++-
> >  kernel/trace/trace_syscalls.c      |   97 +++++++++++++++++++++++++------
> >  5 files changed, 234 insertions(+), 87 deletions(-)
> 
> Hm, the naming is quite confusing here i think:
> 
>   -132,8 +133,12 @@ struct ftrace_event_call {
>          atomic_t                profile_count;
>          int                     (*profile_enable)(void);
>          void                    (*profile_disable)(void);
>  +       char                    *profile_buf;
>  +       char                    *profile_buf_nmi;
> 
> These are generic events, not just 'profiling' histograms.
> 
> Generic events can have _many_ output modi:
> 
>  - SVGs                   (perf timeline)
>  - histograms             (perf report)
>  - traces                 (perf trace)
>  - summaries / maximums   (perf sched lat)
>  - maps                   (perf sched map)
>  - graphs                 (perf report --call-graph)
> 
> So it's quite a misnomer to talk just about profiling here. This is an 
> event record buffer.



Agreed, I guess we can call them perf_event_buf/perf_event_buf_nmi.
Maybe profile_enable/profile_disable should follow the same renaming
logic as well.



> Also, what is the currently maximum possible size of ->profile_buf? The 
> max size of an event record? The new codepath looks a bit heavy with 
> rcu-lock/unlock and other bits put inbetween - and this is now in the 
> event sending critical path. Cannot we do a permanent buffer that needs 
> no extra locking/reference protection?
> 
> Is the whole thing even justified? I mean, we keep the size of records 
> low anyway. It's a _lot_ easier to handle on-stack records, they are the 
> ideal (and very fast) dynamic allocator which is NMI and IRQ safe, etc.
> 
> 	Ingo


The max size of an event is undefined once it uses either
a __dynamic_array() or __string() field. (The latter is a subset
of the former anyway.)

Those are very special fields that can handle dynamically sized
arrays. That makes such events have an unpredictable size each
time they are triggered.

That said, we are currently using a stack based buffer. Coupled
with the unpredictable event size, that really must be fixed. I mean,
we don't want to overflow the stack once an event carries a long
string, once a new large event is added, or once an event happens to
trigger in a path where the stack is already deep.

I did this easy stack-based version as a first shot at supporting
ftrace raw events from perf, but now it has become something that
needs to be fixed IMO.

I've made this rcu based thing to avoid wasting the buffer memory
(on each cpu) while we are not profiling the raw events.

I could drop that and keep these buffers static, but this seems
wasteful wrt memory footprint while profiling/tracing is inactive.

