Date:	Thu, 25 Jun 2009 08:55:51 -0400 (EDT)
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Ben Gamari <bgamari.foss@...il.com>
cc:	linux-kernel@...r.kernel.org,
	"Stone, Joshua I" <joshua.i.stone@...el.com>,
	Robert Richter <robert.richter@....com>,
	anil.s.keshavamurthy@...el.com, ananth@...ibm.com,
	davem@...emloft.net, mhiramat@...hat.com,
	SystemTap <systemtap@...rces.redhat.com>,
	Eric Anholt <eric@...olt.net>,
	Chris Wilson <chris@...is-wilson.co.uk>,
	intel-gfx@...ts.freedesktop.org
Subject: Re: Infrastructure for tracking driver performance events



On Wed, 24 Jun 2009, Ben Gamari wrote:
> 
> I am investigating how this might be accomplished with existing kernel
> infrastructure. At first, ftrace looked like a promising option, as the
> sysprof profiler is driven by ftrace and provides exactly the type of
> full system backtraces we need. We could probably even approximate our
> desired result by calling one function when we begin waiting and
> another when we finish, then using a script to look for these events.
> I haven't looked into how we could get a usermode trace with this
> approach, but it seems possible as sysprof already does it.
> 
> While this approach would work, it has a few shortcomings:
> 1) Function graph tracing must be enabled on the entire machine to debug
>    stalls

You can filter which functions to trace. Or add a list of functions
to set_graph_function to graph only those specific functions.
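For example, a minimal sketch of both approaches, assuming debugfs is
mounted at the usual location (the function name used here is only
illustrative):

```shell
# Standard ftrace control files; assumes debugfs is mounted here.
cd /sys/kernel/debug/tracing

# Trace only the listed functions with the plain function tracer:
echo i915_wait_request > set_ftrace_filter   # placeholder function name
echo function > current_tracer

# Or graph only a specific function with the function graph tracer,
# so the call graph is recorded only inside that function:
echo i915_wait_request > set_graph_function
echo function_graph > current_tracer

cat trace
```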

> 2) It is difficult to extract the kernel mode callgraph with no natural
>    way to capture the usermode callgraph

Do you just need a backtrace of some point, or a full user mode graph?

> 3) A large amount of usermode support is necessary (which will likely be
>    the case for any option; listed here for completeness)
> 
> Another option seems to be systemtap. It has already been documented[3]
> that this option could provide both user-mode and kernel-mode
> backtraces. The driver could provide a kernel marker at every potential
> wait point (or a single marker in a function called at each wait point,
> for that matter) which would be picked up by systemtap and processed in
> usermode, calling ptrace to acquire a usermode backtrace. This approach
> seems slightly cleaner as it doesn't require tracing the entire
> machine to catch what should (hopefully) be reasonably rare events.

Enabling the userstacktrace option will give you userspace stack traces
at event trace points. The catch is that the userspace utility must be
built with frame pointers.
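A sketch of what that looks like, assuming the standard tracing
directory (the program name below is a placeholder; with gcc,
-fno-omit-frame-pointer is the usual way to keep frame pointers):

```shell
cd /sys/kernel/debug/tracing

# Record a userspace stack trace whenever an event fires:
echo 1 > options/userstacktrace
# Optionally resolve user addresses to object/symbol names in the output:
echo 1 > options/sym-userobj

# The traced application must keep frame pointers for the unwind to work:
gcc -fno-omit-frame-pointer -o myapp myapp.c   # myapp is a placeholder
```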

-- Steve

> 
> Unfortunately, the systemtap approach described in [3] requires that
> each process have an associated "driver" process to get a usermode
> backtrace. It would be nice to avoid this requirement as there are
> generally far more gpu clients than just the X server (i.e. direct
> rendering clients) and tracking them all could get tricky.
> 
> These are the two options I have seen thus far. It seems like getting
> this sort of information will be increasingly important as more and more
> drivers move into kernel-space and it is likely that the intel
> implementation will be a model for future drivers, so it would be nice
> to implement it correctly the first time. Does anyone see an option
> which I have missed?  Are there any thoughts on any new generic services
> that the kernel might provide that might make this task easier? Any
> comments, questions, or complaints would be greatly appreciated.
> 
