Date:	Fri, 17 Oct 2008 18:48:38 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Jason Baron <jbaron@...hat.com>
Cc:	linux-kernel@...r.kernel.org, compudj@...stal.dyndns.org,
	fche@...hat.com, fweisbec@...il.com, edwintorok@...il.com,
	mingo@...e.hu, Steven Rostedt <rostedt@...dmis.org>
Subject: Re: tracepoints for kernel/mutex.c

On Fri, 2008-10-17 at 10:48 -0400, Jason Baron wrote:
> On Thu, Oct 16, 2008 at 11:34:38PM +0200, Peter Zijlstra wrote:
> > On Thu, 2008-10-16 at 17:04 -0400, Jason Baron wrote:
> > 
> > > Below are 3 tracepoints I've been playing with in kernel/mutex.c using
> > > a SystemTap script. The idea is to detect and determine the cause of
> > > lock contention. Currently I get the following output:
> > > 
> > > <contended mutex name> <process name and pid of the contention> <time of
> > > contention> <pid that woke me up (caused the contention?)>
> > 
> > > I think this approach has a number of advantages. It has low
> > > overhead in the off case, since it's based on tracepoints. It is
> > > minimally invasive in the code path (3 tracepoints). It also allows me
> > > to explore data structures and parts of the kernel by simply modifying
> > > the SystemTap script. I do not need to re-compile the kernel and reboot.
> > 
> > *sigh* this is why I hate markers and all related things...
> > 
> > _IFF_ you want to place tracepoints, put them in the same places as the
> > lockdep/lockstat hooks; that way you get all the locks, not only mutexes.
> 
> Makes sense. So we could layer lockdep/lockstat on top of tracepoints? That
> would potentially also make lockdep/lockstat more dynamic.

I'm afraid that won't work. Both lockdep and lockstat rely on data added
to the lock structure. But what you can do is expose the hooks as
tracepoints when lockdep/lockstat is configured.
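
A completely untested sketch of what I mean (the hook names mirror the
existing lockdep entry points, but the exact signatures and tracepoint
macro spellings here are from memory, so treat the details as
illustrative only). One tracepoint per lockdep hook covers every lock
type, not just mutexes:

	/* untested sketch: expose the lockdep hooks as tracepoints */
	#include <linux/lockdep.h>
	#include <linux/tracepoint.h>

	DECLARE_TRACE(lock_acquire,
		TPPROTO(struct lockdep_map *lock, unsigned int subclass,
			int trylock, int read, unsigned long ip),
		TPARGS(lock, subclass, trylock, read, ip));

	DECLARE_TRACE(lock_contended,
		TPPROTO(struct lockdep_map *lock, unsigned long ip),
		TPARGS(lock, ip));

	DECLARE_TRACE(lock_acquired,
		TPPROTO(struct lockdep_map *lock, unsigned long ip),
		TPARGS(lock, ip));

	DECLARE_TRACE(lock_release,
		TPPROTO(struct lockdep_map *lock, int nested, unsigned long ip),
		TPARGS(lock, nested, ip));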

> > 
> > This is the same reason I absolutely _hate_ Edwin's rwsem tracer.
> > 
> 
> I'm trying to get some consensus on these types of patches. Do we
> want to create a new tracer for each thing we want to trace, or add
> tracepoints?

The only thing I'd consider is one lock-tracer that exposes all
lockdep/lockstat hooks. Any half-assed partial solution won't fly.

> > Folks, let's please start by getting the tracing infrastructure in and
> > those few high-level tracepoints Google proposed.
> > 
> > Until we get the basics in, I think I'm going to NAK any and all
> > tracepoint/marker patches.
> > 
> 
> I think that core locking functions are pretty basic...

For kernel developers, yes. For userspace, stuff like latencytop should
be good enough to notice that something is up.

And kernel developers can recompile their kernel - that's the only way
you're going to do anything about lock contention anyway.
