Date:	Fri, 17 Oct 2008 12:43:52 -0400
From:	Mathieu Desnoyers <compudj@...stal.dyndns.org>
To:	Jason Baron <jbaron@...hat.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, fche@...hat.com, fweisbec@...il.com,
	edwintorok@...il.com, mingo@...e.hu,
	Steven Rostedt <rostedt@...dmis.org>
Subject: Re: tracepoints for kernel/mutex.c

* Jason Baron (jbaron@...hat.com) wrote:
> On Thu, Oct 16, 2008 at 11:34:38PM +0200, Peter Zijlstra wrote:
> > On Thu, 2008-10-16 at 17:04 -0400, Jason Baron wrote:
> > 
> > > Below are 3 tracepoints I've been playing with in kernel/mutex.c using
> > > a SystemTap script. The idea is to detect and determine the cause of
> > > lock contention. Currently I get the following output:
> > > 
> > > <contended mutex name> <process name and pid of the contention> <time of
> > > contention> <pid that woke me up (caused the contention?)>
> > 
> > > I think this approach has a number of advantages. It has low
> > > overhead in the off case, since it's based on tracepoints. It is
> > > minimally invasive in the code path (3 tracepoints). It also allows me
> > > to explore data structures and parts of the kernel by simply modifying
> > > the SystemTap script. I do not need to re-compile the kernel and reboot.
> > 
> > *sigh* this is why I hate markers and all related things...
> > 
> > _IFF_ you want to place tracepoints, get them in the same place as the
> > lock-dep/stat hooks; that way you get all the locks, not only mutexes.
> 
> Makes sense. So we could layer lock-dep/stat on top of tracepoints? That
> would potentially also make lock-dep/stat more dynamic.
> 
> > 
> > This is the same reason I absolutely _hate_ Edwin's rwsem tracer.
> > 
> 
> I'm trying to get some consensus on these types of patches. Do we
> want to create a new tracer for each thing we want to trace, or add
> tracepoints?
> 
> > Folks, let's please start by getting the tracing infrastructure in and
> > those few high-level trace-points google proposed.
> > 
> > Until we get the basics in, I think I'm going to NAK any and all
> > tracepoint/marker patches.
> > 
> 
> I think that core locking functions are pretty basic...
> 
> 
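
(As a rough illustration of the approach described above, and not the
actual patch: the names, arguments and hook placement below are guesses.
Three such contention tracepoints could be declared with the generic
tracepoint interface along these lines.)

/*
 * Illustrative sketch only.  Three hypothetical contention tracepoints
 * for kernel/mutex.c, declared with the generic tracepoint interface
 * (exact macro spellings differ between kernel versions).  A SystemTap
 * script would then attach probes to them to log contention.
 */
#include <linux/tracepoint.h>
#include <linux/mutex.h>
#include <linux/sched.h>

/* a task missed the fastpath and is about to block on the mutex */
DECLARE_TRACE(mutex_wait_start,
	TP_PROTO(struct mutex *lock, struct task_struct *task),
	TP_ARGS(lock, task));

/* the blocked task has been woken and has acquired the mutex */
DECLARE_TRACE(mutex_wait_done,
	TP_PROTO(struct mutex *lock, struct task_struct *task),
	TP_ARGS(lock, task));

/* the previous owner is waking up the first waiter on unlock */
DECLARE_TRACE(mutex_wakeup_waiter,
	TP_PROTO(struct mutex *lock, struct task_struct *waiter),
	TP_ARGS(lock, waiter));

The call sites (trace_mutex_wait_start(lock, current) and friends) would
presumably sit in the __mutex_lock_common() slowpath and in the unlock
path; hooking the generic lockdep/lockstat entry points instead, as Peter
suggests, would cover spinlocks and rwsems as well.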

Guys, please, let's focus on the infrastructure to manage trace data
(timestamping, buffering, event IDs, event type management) before going
any further in the instrumentation direction. Otherwise we will end up
adding instrumentation to the Linux kernel without any in-kernel user
(or with various tiny users lacking an overall management infrastructure),
and that sounds like a no-go. This is what Peter and Thomas mean when they
talk of NAKing such patches. Their position is actually nothing new: Linus
was the first to say this at the 2008 Kernel Summit, and I think it makes
sense.
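
(As a hypothetical sketch of what that bookkeeping amounts to, and not
LTTng or ftrace code: each record written into a per-CPU trace buffer
would carry something like the header below, with the event-specific
payload following it.)

#include <linux/types.h>

/*
 * Hypothetical per-event header, for illustration only: whatever the
 * final infrastructure looks like, every record needs a timestamp, an
 * ID that resolves against a registry of event types, and the size of
 * the payload that follows.
 */
struct trace_event_header {
	u64	timestamp;	/* from a monotonic trace clock        */
	u16	event_id;	/* index into the event type registry  */
	u16	payload_len;	/* bytes of payload after this header  */
};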

We will have plenty of time to optimize
markers/tracepoints/lockdep/function trace _once_ we get the data
collection right.

Mathieu

-- 
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68